Cybersecurity – Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

Biased AI is More Than a Technical Problem: Building a Process-oriented Policy Approach to AI Governance
Wed, 06 May 2020
Image Credit: Police Science Innovation

[Co-authored with Walter Stover]

Artificial Intelligence (AI) systems have grown more prominent in both their use and their unintended effects. Just last month, the LAPD announced that it would end its use of a predictive policing system known as PredPol, which had sustained criticism for reinforcing policing practices that disproportionately affect minorities. Such incidents of machine learning algorithms producing unintentionally biased outcomes have prompted calls for ‘ethical AI’. However, this approach focuses on technical fixes to AI and ignores two crucial components of undesired outcomes: the subjectivity of the data fed into and out of AI systems, and the interaction between the actors who must interpret that data. When considering regulation of artificial intelligence, policymakers, companies, and other organizations using AI should therefore focus less on the algorithms and more on data and how it flows between actors, to reduce the risk of misdiagnosing AI systems. To be sure, applying an ethical AI framework is better than discounting ethics altogether, but an approach that focuses on the interaction between human and data processes is a better foundation for AI policy.

The fundamental mistake underlying the ethical AI framework is that it treats biased outcomes as a purely technical problem. If this were true, then fixing the algorithm would be an effective solution, because the outcome would be entirely defined by the tools applied. In the case of landing a man on the moon, for instance, we can tweak the telemetry of the rocket according to well-defined physical principles until the man is on the moon. In the case of biased social outcomes, the problem is not well-defined. Who decides what an appropriate level of policing is for minorities? What sentence lengths are appropriate for which groups of individuals? What is an acceptable level of bias? An AI is simply a tool that transforms input data into output data, but it is people who give meaning to data at both steps, in the context of their understanding of these questions and of what appropriate measures of such outcomes are.

The Austrian school of economics is well-suited to helping us grapple with these kinds of less well-defined problems. Austrian economists levied a similar critique against mainstream economics, which treated economic outcomes as a technical problem to be solved with specific technical decisions. The Austrians stressed a principle of methodological individualism, which holds that socioeconomic outcomes are ultimately the products of individual decisions, and cannot be acted on directly by technocratic policymakers. Methodological individualism involves the recognition that individuals drive outcomes in two primary aspects: subjective interpretation of their environment, and through interaction with each other and that same environment. We can sum up application of these two aspects to AI systems in two questions: who gets the data, and where does the data go?

It matters who gets the data because the necessity of subjective interpretation will lead different people to reach different conclusions about the same data. As an example, a dataset of financial variables such as defaults and debt repayment frequency, combined with personal characteristics such as race and geographic location, may lead one person to label African-Americans as greater credit risks. Other individuals reading the same data, however, may arrive at a different conclusion: the patterns in the data stem from structural racism that has suppressed the income of African-American households relative to other households, and do not indicate that those households are inherently riskier. The first interpretation would result in biased outcomes from an AI system used to generate predictions of credit risk based on that data, whereas the second interpretation might actually result in beneficial outcomes; for instance, a lender might offer credit on more lenient terms to these individuals.
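The point about interpretation can be sketched with a toy example (all records, numbers, and field names here are hypothetical, invented purely for illustration): the same data yields different risk rankings depending on whether an analyst reads the group correlation at face value or attributes it to a structural income gap.

```python
# Hypothetical records: the same data supports two different readings
# depending on how the analyst interprets the group/default correlation.
records = [
    {"group": "A", "income": 30_000, "defaulted": True},
    {"group": "A", "income": 32_000, "defaulted": False},
    {"group": "B", "income": 60_000, "defaulted": False},
    {"group": "B", "income": 62_000, "defaulted": False},
]

def risk_by_group(data):
    """Interpretation 1: read default rates per group at face value."""
    rates = {}
    for g in {r["group"] for r in data}:
        grp = [r for r in data if r["group"] == g]
        rates[g] = sum(r["defaulted"] for r in grp) / len(grp)
    return rates

def risk_by_income(data, threshold=40_000):
    """Interpretation 2: attribute the pattern to suppressed income,
    a structural factor, rather than to group membership itself."""
    rates = {}
    for band in ("low", "high"):
        grp = [r for r in data
               if (r["income"] < threshold) == (band == "low")]
        rates[band] = sum(r["defaulted"] for r in grp) / len(grp)
    return rates

print(risk_by_group(records))   # group A looks riskier
print(risk_by_income(records))  # low income looks riskier; group drops out
```

An AI system trained under the first reading would score group membership as risk; under the second, the same patterns point toward income support or lenient terms instead.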

The second question of where data goes depends on the interaction of individuals with each other and their environment, which drives the flow of data and also determines how that data is acted upon. In her book Weapons of Math Destruction, Cathy O’Neil offers a perfect example of this when analyzing what went wrong with the LAPD’s use of PredPol, which took in data on past crimes and used it to predict the geographic location of new crimes. Police forces took this data and increased their presence in predicted crime hot spots, which created a positive feedback loop: more crime data originated in those areas (because increased interaction between police officers and residents produced more arrests), which generated more predictions of crime there, leading to over-policing of minority groups. Ultimately, the data went to a police department that unintentionally increased arrests of minority groups.
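The feedback loop O’Neil describes can be sketched in a few lines of simulation (the starting values and arrest rate are invented for illustration, not drawn from PredPol or LAPD data):

```python
def simulate(rounds=5, arrest_rate=0.12):
    # Two neighborhoods with the same underlying crime rate;
    # neighborhood 0 merely starts with slightly more *recorded* crime.
    recorded = [10.0, 9.0]
    history = [recorded.copy()]
    for _ in range(rounds):
        total = sum(recorded)
        # The model allocates 100 patrols in proportion to recorded crime...
        patrols = [100 * r / total for r in recorded]
        # ...and more patrols mechanically yield more recorded arrests,
        # which feed the next round of predictions.
        recorded = [r + p * arrest_rate for r, p in zip(recorded, patrols)]
        history.append(recorded.copy())
    return history

for step in simulate():
    print([round(x, 1) for x in step])
# The initial 10-vs-9 gap widens every round even though the underlying
# crime rate never changed: the data flow, not the algorithm, drives it.
```

Nothing inside the prediction step is "broken"; the biased outcome emerges from how the output data is routed back in as input.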

Together, the subjectivity of data and the importance of interaction get at a core insight of Austrian economics that directly follows the principle of methodological individualism: context matters. If how data is interpreted and used differs from person to person, then the flows of data matter in who gets the data first and how they use it, potentially transforming the data before sending it on. Thinking along these lines shifts us away from focusing on building better, more ethical AI, and more towards trying to better understand the dynamics of data within a system: who is selecting which data to feed into an AI, what data the AI then generates, and most importantly, how that data is then acted upon and by whom. If we don’t take these matters into consideration, we risk myopically focusing on fixes to the AI that will not change outcomes. In the case of PredPol, for example, the AI could have been completely transparent, but the outcome would have been the same because of how police officers were acting on the output data according to their institutional context.

Some experts are already calling for more process-oriented AI governance approaches, including the EU’s High-Level Expert Group on AI and professional services network KPMG. Carolyn Herzog, general counsel and chair of an ethics working group, comes close to the approach we are advocating in stressing that “…data is the lifeblood of AI,” and that we must pay attention to “…issues of how that data is being collected, how it is being used, and how it is being safeguarded.” However, at present, this data-oriented approach is not represented clearly in U.S. policy. Recent AI policy movements, including ethical principles released by the Department of Defense and the Office of Management and Budget’s AI Guidelines, are a good first step but still emphasize the technology more than the data flows, and are limited to the government’s use of AI. Principle 9 of the guidelines, for instance, notes the importance of having controls to ensure “…confidentiality, integrity, and availability of the information stored, processed, and transmitted by AI systems,” but does not extend this to explicitly consider how the data is used after being transmitted.

Moreover, these proposals do not coherently lay out the relationship between data and AI outcomes, because they do not place enough emphasis on where data goes and how it is used in context after being transmitted from the AI system. Returning to our earlier point, interactions matter. Take PredPol as an example. Even if we know how data was being collected, stored, and used by PredPol, and by the police department, these two pieces in isolation are not enough to understand the emergent outcome that results from the interaction between the two organizations. The critical driver is the feedback loop that emerges as data flows back and forth between PredPol and the police department. Current policy proposals risk overlooking this class of emergent AI outcomes by narrowly focusing on the AI and data practices of just one organization, rather than explicitly drawing our attention to how data circulates in the wider data ecosystem.

What’s needed is a process-oriented, systemic policy approach focused not just on AI, but on how data is interpreted and used in context by individuals and organizations on the ground, and on how these parties interact with each other. The NTIA would be a good convener for drafting this framework, given its success in leading a multi-stakeholder process to build a framework for enhancing cybersecurity. The NTIA can use the AI Now Institute’s algorithmic impact assessment as a blueprint. By building a voluntary framework for AI outcomes, the NTIA can serve a dual purpose. First, it can help ease worries over how to stay compliant with best practices. Second, it can help organizations safeguard against unwanted outcomes of AI systems, and more effectively identify and correct problems that do arise instead of depending on outside forensic data analysis after the fact. The NTIA can help establish a common language for AI systems between public and private entities that gives organizations concrete steps they can take to avoid these outcomes.

11th Circuit LabMD Decision Rewrites FTC Unfairness Test – In Dicta?
Mon, 11 Jun 2018

Last week the U.S. Court of Appeals for the 11th Circuit vacated a Federal Trade Commission order requiring medical diagnostic company LabMD to adopt reasonable data security, handing the FTC a loss in an important data security case. In some ways, this outcome is not surprising. This was a close case with a tenacious defendant, and it raised important questions about FTC authority, how to interpret “unfairness” under the FTC Act, and the Commission’s data security program.

Unfortunately, the decision answers none of those important questions and makes a total hash of the FTC’s current unfairness law. While some critics of the FTC’s data security program may be pleased with the outcome of this decision, they ought to be concerned with its reasoning, which harkens back to the “public policy” test for unfairness that was greatly abused by the FTC in the 1970s.

The most problematic parts of this decision are likely dicta, but it is still worth describing how sharply this decision conflicts with the FTC’s modern unfairness test.  The court’s reasoning could implicate not only the FTC’s data security authority but its overall authority to police unfair practices of any kind.

(I’m going to skip the facts and procedural background of the case because the key issues are matters of law unrelated to the facts of the case. The relevant facts and procedure are laid out in the decision’s first and most lucid section. I’m also going to limit this piece to the decision’s unfairness analysis. There’s more to say about the court’s conclusion that the FTC’s order is unenforceable, but this post is already long. Interesting takes here and here.)

In short, the court’s decision attempts to rewrite a quarter century of FTC unfairness law.  By doing so, it elevates a branch of unfairness analysis that, in the 1970s, landed the FTC in big trouble.  First, I’ll summarize the current unfairness test as stated in the FTC Act. Next, I’ll discuss the previous unfairness test, the trouble it caused, and how that resulted in the modern test. Finally, I’ll look at how the LabMD decision rejects the modern test and discuss some implications.

The Modern Unfairness Test

If you’ve read an FTC complaint with an unfairness count in the last two decades, you’re probably familiar with the modern unfairness test. A practice is unfair if it causes substantial injury that the consumer cannot reasonably avoid, and which is not outweighed by benefits to consumers or competition. In 1994, Congress codified this three-part test in Section 5(n) of the FTC Act, which reads in full:

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice [1] causes or is likely to cause substantial injury to consumers which [2] is not reasonably avoidable by consumers themselves and [3] not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination. [Emphasis added]

The text of Section 5(n) makes two things clear: 1) a practice is not unfair unless it meets the three-part consumer injury test and 2) public policy considerations can be helpful evidence of unfairness but are not sufficient or even necessary to demonstrate it. Thus, the three-part consumer injury test is centrally important to the unfairness analysis. Indeed, the three-part consumer injury test set out in Section 5(n) has been synonymous with the unfairness test for decades.

The Previous, Problematic Test for Unfairness

But the unfairness test used to be quite different. In outlining the test’s history, I am going to borrow heavily from Howard Beales’ excellent 2003 essay, “The FTC’s Use of Unfairness Authority: Its Rise, Fall, and Resurrection.” (Beales was the Director of the FTC’s Bureau of Consumer Protection under Republican FTC Chairman Timothy Muris.) Beales describes the previous test for unfairness:

In 1964 … the Commission set forth a test for determining whether an act or practice is “unfair”: 1) whether the practice “offends public policy” – as set forth in “statutes, the common law, or otherwise”; 2) whether it is “immoral, unethical, oppressive, or unscrupulous”; 3) whether it causes substantial injury to consumers (or competitors or other businessmen). …. [T]he Supreme Court, while reversing the Commission in Sperry & Hutchinson, cited the Cigarette Rule unfairness criteria with apparent approval….

This three-part test – public policy, immorality, and/or substantial injury – gave the agency enormous discretion, and the FTC began to wield that discretion in a problematic manner. Beales describes the effect of the S&H dicta:

Emboldened by the Supreme Court’s dicta, the Commission set forth to test the limits of the unfairness doctrine. Unfortunately, the Court gave no guidance to the Commission on how to weigh the three prongs – even suggesting that the test could properly be read disjunctively.

The result was a series of rulemakings relying upon broad, newly found theories of unfairness that often had no empirical basis, could be based entirely upon the individual Commissioner’s personal values, and did not have to consider the ultimate costs to consumers of foregoing their ability to choose freely in the marketplace. Predictably, there were many absurd and harmful results.

According to Beales, “[t]he most problematic proposals relied heavily on ‘public policy’ with little or no consideration of consumer injury.”  This regulatory overreach triggered a major backlash from businesses, Congress, and the media. The Washington Post called the FTC the “National Nanny.” Congress even defunded the agency for a time.

The backlash prompted the agency to revisit the S&H criteria.  As Beales describes,

As the Commission struggled with the proper standard for unfairness, it moved away from public policy and towards consumer injury, and consumer sovereignty, as the appropriate focus…. On December 17, 1980, a unanimous Commission formally adopted the Unfairness Policy Statement, and declared that “[u]njustified consumer injury is the primary focus of the FTC Act, and the most important of the three S&H criteria.”

This Unfairness Statement recast the relationship between the three S&H criteria, discarding the “immoral” prong entirely and elevating consumer injury above public policy: “Unjustified consumer injury is the primary focus of the FTC Act, and the most important of the three S&H criteria. By itself it can be sufficient to warrant a finding of unfairness.” [emphasis added]  It was this Statement that first established the three-part consumer injury test now codified in Section 5(n).

Most importantly for our purposes, the statement explained the optional nature of the S&H “public policy” factor. As Beales details,

[I]n most instances, “the proper role of public policy is as evidence to be considered in determining the balance of costs and benefits,” although “public policy can ‘independently support a Commission action . . . when the policy is so clear that it will entirely determine the question of consumer injury, so there is little need for separate analysis by the Commission.’” [emphasis added]

In a 1982 letter to Congress, the Commission reiterated that public policy “is not a necessary element of the definition of unfairness.”

As the 1980s progressed, the Unfairness Policy statement, specifically the three-part test for consumer injury, “became accepted as the appropriate test for determining unfairness…” But not all was settled.  Beales again:

The danger of unfettered “public policy” analysis as an independent basis for unfairness still existed, however [because] the Unfairness Policy Statement itself continued to hold out the possibility of public policy as the sole basis for a finding of unfairness. A less cautious Commission might ignore the lessons of history, and dust off public policy-based unfairness. … When Congress eventually reauthorized the FTC in 1994, it codified the three-part consumer injury unfairness test. It also codified the limited role of public policy. Under the statutory standard, the Commission may consider public policies, but it cannot use public policy as an independent basis for finding unfairness. The Commission’s long and dangerous flirtation with ill-defined public policy as a basis for independent action was over.

Flirting with Public Policy, Again

To sum up, chastened for overreaching its authority using the public policy prong of the S&H criteria, the FTC refocused its unfairness authority on consumer injury.  Congress ratified that refocus in Section 5(n) of the FTC Act, as I’ve discussed above. Today, under modern unfairness law, FTC complaints rarely make public policy arguments and only then to bolster evidence of consumer injury.

In last week’s LabMD decision, the 11th Circuit rejects this long-standing approach to unfairness. Consider these excerpts from its decision:

“The Commission must find the standards of unfairness it enforces in ‘clear and well-established’ policies that are expressed in the Constitution, statutes, or the common law.”

“An act or practice’s ‘unfairness’ must be grounded in statute, judicial decisions – i.e., the common law – or the Constitution. An act or practice that causes substantial injury but lacks such grounding is not unfair within Section 5(a)’s meaning.”

“Thus, an ‘unfair’ act or practice is one which meets the consumer-injury factors listed above and is grounded in well-established legal policy.”

And consider this especially salty bite of pretzel logic based on a selective citation of the FTC Act:

“Section 5(n) now states, with regard to public policy, ‘In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.’  We do not take this ambiguous statement to mean that the Commission may bring suit purely on the basis of substantial consumer injury. The act or practice alleged to have caused injury must still be unfair under a well-established legal standard, whether grounded in statute, the common law, or the Constitution.” [emphasis added]

Yet those two sentences in 5(n) are quite clear when read in context with the full paragraph, which requires the three-part consumer injury test but merely permits the FTC to consider public policies as evidence. The court’s interpretation here is also undercut by the FTC’s historic misuse of public policy and Congress’s subsequent intent in Section 5(n) to limit FTC overreach by restricting the use of public policy evidence. Congress sought to restrict the FTC’s use of public policy; the 11th Circuit’s decision seeks to require it.

To be fair, the court is not exactly returning to the wild pre-Unfairness Statement days when the FTC thought public policy alone was sufficient to find an act or practice unfair.  Instead, the court has developed a new, stricter test for unfairness that requires both consumer injury and offense to public policy.
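The contrast between the statutory test and the court’s bespoke one can be sketched schematically (this is an illustration of the logical structure only, not legal analysis; the predicate names are my own):

```python
def unfair_under_5n(substantial_injury, reasonably_avoidable,
                    outweighed_by_benefits):
    """Section 5(n): the three-part consumer injury test, conjunctive.
    Public policy may be *evidence* but is neither necessary nor
    sufficient, so it does not appear as an element here."""
    return (substantial_injury
            and not reasonably_avoidable
            and not outweighed_by_benefits)

def unfair_under_labmd(substantial_injury, reasonably_avoidable,
                       outweighed_by_benefits, grounded_in_public_policy):
    """The court's stricter test: consumer injury AND a mandatory
    grounding in well-established public policy."""
    return (unfair_under_5n(substantial_injury, reasonably_avoidable,
                            outweighed_by_benefits)
            and grounded_in_public_policy)

# A practice causing unavoidable, un-offset substantial injury is
# unfair under the statute, but not under the court's test absent a
# "well-established" policy source.
print(unfair_under_5n(True, False, False))            # True
print(unfair_under_labmd(True, False, False, False))  # False
```

Every complaint satisfying the court’s conjunction also satisfies the statutory one, but not vice versa, which is why the new test is strictly narrower.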

After crafting this bespoke unfairness test by inserting a mandatory public policy element, the decision then criticizes the FTC complaint for “not explicitly” citing the public policy source for its “standard of unfairness.” But it is obvious why the FTC didn’t include a public policy element in the complaint: no one has thought it necessary for more than two decades. (Note, however, that the Commission’s decision does cite numerous statutes and common law principles as public policy evidence of consumer injury in this case.)

The court supplies the missing public policy element for the FTC: “It is apparent to us, though, that the source is the common law of negligence.” The court then determines that “the Commission’s action implies” that common law negligence “is a source that provides standards for determining whether an act or practice is unfair….”

Having thus rewritten the Commission’s argument and decades of FTC law, the court again surprises. Rather than analyze LabMD’s liability under this new standard, the court “assumes arguendo that the Commission is correct and that LabMD’s negligent failure to design and maintain a reasonable data security program invaded consumers’ right of privacy and thus constituted an unfair act or practice.”

Thus, the court does not actually rely on the unfairness test it has set out, arguably rendering that entire analysis dicta.

Why Dicta?

What is going on here? I believe the court is suggesting how data security cases ought to be pled, even though it cannot require this standard under Section 5(n) – and perhaps would not want to, given the collateral effect on other types of unfairness cases.

The court clearly wanted to signal something through this exercise. Otherwise, it would have been much easier to assume arguendo LabMD’s liability under the existing three-prong consumer injury unfairness test contained in the FTC’s complaint. Instead, the court constructs a new unfairness test, interprets the FTC’s complaint to match it, and then appears to render its unfairness analysis dicta.

So, what exactly is the court signaling? This new unfairness test is stricter than the Section 5(n) definition of unfairness, and thus any complaint that satisfies the LabMD test would also satisfy the statutory test.  Thus, perhaps the court seeks to encourage the FTC to plead data security complaints more strictly than legally necessary by including references to public policy.

Had the court applied its bespoke standard to find that LabMD was not liable, I think the FTC would have had no choice but to appeal the decision. By upsetting 20+ years of unfairness law, the court’s analysis would have affected far more than just the FTC’s data security program. The FTC brings many non-data-security cases under its unfairness authority, including illegal bill cramming, unauthorized payment processing, and other types of fraud where deception cannot adequately address the problem. The new LabMD unfairness test would affect many such areas of FTC enforcement. But by assuming arguendo LabMD’s liability, the court may have avoided such effects and thus reduced the FTC’s incentive to appeal on these grounds.

Dicta or not, appeal or not, the LabMD decision has elevated unfairness’s “public policy” factor. Given the FTC’s misuse of that factor in the past, FTC watchers ought to keep an eye out.

—-

Last week’s LabMD decision will shape the constantly evolving data security policy environment.  At the Charles Koch Institute, we believe that a healthy data security policy environment will encourage permissionless innovation while addressing real consumer harms as they arise.  More broadly, we believe that innovation and technological progress are necessary to achieve widespread human flourishing.  And we seek to foster innovation-promoting environments through educational programs and academic grant-making.

Nationalizing 5G networks? Why that’s a bad idea.
Mon, 29 Jan 2018

Yesterday Axios published a bold, bizarre proposal: leaked documents from a “senior National Security Council official” for accelerating 5G deployment in the US. “5G” refers to the latest generation of wireless technologies, whose evolving specifications are being standardized by global telecommunications companies as we speak. The proposal highlights some reasonable concerns (the need for secure networks, the deleterious slowness in getting wireless infrastructure permits from thousands of municipalities and counties) but recommends an unreasonable solution: a government-operated, nationwide wireless network.

The proposal to nationalize some 5G equipment and network components needs to be nipped in the bud. It relies on the dated notion that centralized government management outperforms “wasteful competition.” It’s infeasible and would severely damage the US telecom and Internet sector, one of the brightest spots in the US economy. The plan will likely go nowhere, but the fact that it’s being circulated by administration officials is alarming.

First, a little context. In 1927, the US nationalized all radiofrequency spectrum, and for decades the government rationed out dribbles of spectrum for commercial use (though much has improved since liberalization in the 1990s). To this day all spectrum is nationalized and wireless companies operate at sufferance. What this new document proposes would make a poor situation worse.

In particular, the presentation proposes to re-nationalize 500 MHz of spectrum (the 3.7 GHz to 4.2 GHz band, which contains mostly satellite and government incumbents) and build wireless equipment and infrastructure across the country to transmit on this band. The federal government would act as a wholesaler to the commercial networks (AT&T, Verizon, T-Mobile, Sprint, etc.), who would sell retail wireless plans to consumers and businesses.

The justification for nationalizing a portion of 5G networks has a national security component and an economic component: prevent Chinese spying and beat China in the “5G race.”

The announced goals are simultaneously broad and narrow, and in severe tension.

The plan is broad in that it contemplates nationalizing part of the 5G equipment and network. However, it’s narrow in that it would nationalize only a portion of the 5G network (3.7 GHz to 4.2 GHz) and not other portions (like 600 MHz and 28 GHz). This undermines the national security purpose (assuming it’s even feasible to protect the nationalized portion) since 5G networks interconnect. It’d be like having government checkpoints on Interstate 95 but leaving all other interstates checkpoint-free.

Further, the document’s author misunderstands the evolutionary nature of 5G networks. For a while, 5G will be an overlay on the existing 4G LTE network, not a brand-new parallel network, as the NSC document assumes. 5G equipment will be installed on 4G LTE infrastructure in neighborhoods where capacity is strained. In fact, as Sherif Hanna, director of the 5G team at Qualcomm, noted on Twitter, “the first version of the 5G [standard]…by definition requires an existing 4G radio and core network.”

https://twitter.com/sherifhanna/status/957891843533946880

The most implausible idea in the document is that a nationwide 5G network could be deployed in the next few years. Environmental and historic preservation review in a single city can take longer than that. (AT&T has battled NIMBYs and local government in San Francisco for a decade, for instance, to install a few hundred utility boxes in the public right-of-way.) The federal government deploying and maintaining hundreds of thousands of 5G installations from scratch in two years is a pipe dream. And how would it be paid for? The “Financing” section of the document says nothing about how the federal government would find the tens of billions of dollars needed for nationwide deployment of a government 5G network.

The plan to nationalize a portion of 5G wireless networks and deploy nationwide is unwise and unrealistic. It would permanently damage the US broadband industry, it would antagonize city and state officials, it would raise serious privacy and First Amendment concerns, and it would require billions of new tax dollars to deploy. The released plan would also fail to ensure the network security it purports to protect. US telecom companies are lining up to pay the government for spectrum and to invest private dollars to build world-class 5G networks. If the federal government wants to accelerate 5G deployment, it should sell more spectrum and redirect existing government funding towards roadside infrastructure. Network security is a difficult problem but nationalizing networks is overkill.

Already, four out of five [update: all five] FCC commissioners have come out strongly against this plan. Someone reading the NSC proposal would get the impression that the US is sitting still while China is racing ahead on 5G. The US has unique challenges but wireless broadband deployment is probably the FCC’s highest priority. The Commission is aware of the permitting problems and formed the Broadband Deployment Advisory Committee in part for that very purpose (I’m a member). The agency, in cooperation with the Department of Commerce, is also busy looking for more spectrum to release for 5G.

Recode is reporting that White House officials are already distancing the White House from the proposal. Hopefully they will publicly reject the plan soon.

Liberty and Security in the Proposed Internet of Things Cybersecurity Improvement Act of 2017
Wed, 23 Aug 2017

On August 1, Sens. Mark Warner and Cory Gardner introduced the “Internet of Things Cybersecurity Improvement Act of 2017.” The goal of the legislation, according to its sponsors, is to establish “minimum security requirements for federal procurements of connected devices.” Pointing to the growing number of connected devices and their use in prior cyber-attacks, the sponsors aim to provide flexible requirements that limit the vulnerabilities of such networks. Specifically, the bill requires all new Internet of Things (IoT) devices to be patchable, free of known vulnerabilities, and reliant on standard protocols. Overall, the legislation attempts to raise and standardize the baseline security of connected devices while still allowing innovation in the field to remain relatively permissionless. As Ryan Hagemann[1] at the Niskanen Center states, the bill is generally perceived as a step in the right direction: it promotes security while limiting the potential harm regulation poses to overall innovation in the Internet of Things.
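To make the bill’s three headline requirements concrete, they can be pictured as a simple procurement checklist. The sketch below is purely illustrative: the `DeviceProfile` fields, the protocol list, and the check function are invented for this post, not anything the bill or OMB actually specifies.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DeviceProfile:
    """Hypothetical security profile for a connected device (illustrative only)."""
    supports_patching: bool                               # vendor can push security updates
    known_cves: List[str] = field(default_factory=list)   # unresolved published vulnerabilities
    protocols: List[str] = field(default_factory=list)    # communication protocols the device uses

# A stand-in list of "standard protocols" for the sketch.
STANDARD_PROTOCOLS = {"TLS 1.2", "TLS 1.3", "SSH-2", "HTTPS"}

def meets_procurement_baseline(device: DeviceProfile) -> bool:
    """Check the bill's three headline requirements: patchable,
    free of known vulnerabilities, and relying on standard protocols."""
    return (device.supports_patching
            and not device.known_cves
            and all(p in STANDARD_PROTOCOLS for p in device.protocols))

ok = DeviceProfile(True, [], ["TLS 1.3", "HTTPS"])
bad = DeviceProfile(True, ["CVE-2017-0001"], ["TLS 1.3"])
print(meets_procurement_baseline(ok), meets_procurement_baseline(bad))  # True False
```

The point of a checklist-style baseline like this is that it constrains outcomes (no known CVEs, patchable) rather than dictating implementations, which is what lets innovation in the field remain relatively permissionless.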

The proposed legislation creates such security requirements only for Internet of Things products purchased by the federal government. It therefore does not directly address the perceived market failure in securing the Internet of Things for state and local governments or for consumers. It is possible that further state or federal legislation could develop different security norms in these areas, or that the market will be left to sort out what level of security such products need. Similarly, innovators might create different versions of products for consumers than for the government if they find the security requirements of the federal procurement rules unnecessary. At the same time, consumers and other levels of government might reject such products if they perceive them as less secure. For example, state and federal governments have independently developed their own protocols and requirements for security in IT and telecommunications services, and while all require some level of security, the exact requirements vary. Likewise, while most consumers still expect or opt in to some level of security for their personal computers, there are different expectations for government and medical computer networks. A similar phenomenon could emerge in the Internet of Things, with the devices procured by the government more secure than those available to the average consumer.

Defining and quantifying the Internet of Things can be difficult as new connected devices, from toasters to teddy bears, continue to arrive seemingly daily. As Ariel Rabkin discusses, the bill defines the scope of covered devices with the broad, ambiguous term “Internet-connected device,” which could cover not only new connected devices but also much more mundane and common general-purpose items such as laptops and smartphones. This ambiguity presents a serious concern. Given that the security guidelines are to be issued by the Office of Management and Budget in conjunction with each executive agency, agencies could use soft law to push Internet of Things entrepreneurs to adopt such standards beyond the items the government procures. Because the covered items are ambiguous, the bill also raises the question of what happens to emerging technologies such as connected cars, where agencies are already discussing security standards, and to devices such as laptops and cell phones, where government and agency standards already exist. If not clarified, such a broad definition could create uncertainty in the agency-based security standards for procurement. And while the initial standards are aimed at federal procurement, delegating them to agencies could lead to agency threats in the Internet of Things more generally and to the use of government procurement standards as a type of soft law to influence the pace and course of innovation.

The proposed legislation provides a basic start on limiting liability for Internet of Things researchers and systems security architects, especially when coupled with existing intermediary protections. Unlike the FTC’s strict-liability data security rules, the proposed legislation carves out safe harbors for good-faith security research and testing, updating the Digital Millennium Copyright Act (DMCA) and Computer Fraud and Abuse Act (CFAA) to provide safe harbors so long as the device complied with the guidelines issued under the new legislation. This, however, raises questions of liability for non-federal purchasers. First, if devices in the consumer market fail to comply with the proposed standards, could the presence of a more secure government alternative be used to support a design-defect argument as the availability of a reasonable alternative design? And if not for an individual consumer, then what about a state or local government? Under the proposed legislation, merely failing to comply with the standards in a consumer-grade product does not seem likely to give rise to a case against an Internet of Things producer. The proposed legislation also does not appear to adequately address a safe harbor for an insufficient fix or a latent defect. While these situations should not immediately render a company negligent, there are concerns that an insufficient patch might exacerbate rather than solve a problem. Nor does the bill address the situation where a third party fails to update the security measures, or where the government modifies existing protocols on the device and inadvertently changes its security features.

In general, the Internet of Things Cybersecurity Improvement Act of 2017 provides a base level of security that could encourage greater adoption by government entities without disrupting innovation in the consumer market. At the same time, its broad definition of the Internet of Things risks soft-law abuse, and its specificity to government procurement limits its broader impact on IoT security. If passed, the act might promote security across devices, and broader innovation in security protocols, without holding the technology captive to regulation.

[1] Ryan provided feedback on an earlier draft of this post.

Permissionless Innovation & Cybersecurity: Are They Compatible? https://techliberation.com/2016/03/09/permissionless-innovation-cybersecurity-are-they-compatible/ https://techliberation.com/2016/03/09/permissionless-innovation-cybersecurity-are-they-compatible/#respond Wed, 09 Mar 2016 16:58:00 +0000 https://techliberation.com/?p=76006

[This is an excerpt from Chapter 6 of the forthcoming 2nd edition of my book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom,” due out later this month. I was presenting on these issues at today’s New America Foundation “Cybersecurity for a New America” event, so I thought I would post this now.  To learn more about the contrast between “permissionless innovation” and “precautionary principle” thinking, please consult the earlier edition of my book or see this blog post.]


 

Viruses, malware, spam, data breaches, and critical system intrusions are just some of the security-related concerns that often motivate precautionary thinking and policy proposals.[1] But as with privacy- and safety-related worries, the panicky rhetoric surrounding these issues is usually unfocused and counterproductive.

In today’s cybersecurity debates, for example, it is not uncommon to hear frequent allusions to the potential for a “digital Pearl Harbor,”[2] a “cyber cold war,”[3] or even a “cyber 9/11.”[4] These analogies are made even though these historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to “cyber bombs” or technological “time bombs,” even though no one can be “bombed” with binary code.[5] Michael McConnell, a former director of national intelligence, went so far as to say that this “threat is so intrusive, it’s so serious, it could literally suck the life’s blood out of this country.”[6]

Such outrageous statements reflect the frequent use of “threat inflation” rhetoric in debates about online security.[7] Threat inflation has been defined as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.”[8] Unfortunately, such bombastic rhetoric often conflates minor cybersecurity risks with major ones. For example, dramatic doomsday stories about hackers pushing planes out of the sky misdirect policymakers’ attention from the more immediate, but less gripping, risks of data extraction and foreign surveillance. Well-meaning skeptics might then conclude that real cybersecurity risks are likewise overblown. In the meantime, outdated legislation and inappropriate legal norms continue to impede beneficial defensive measures that could truly improve security.

Meanwhile, similar concerns have already been raised about security vulnerabilities associated with the Internet of Things[9] and driverless cars.[10] Legislation has already been floated to address the latter concern through federal certification standards.[11] Broader cybersecurity bills have also been proposed, most notably the Cybersecurity Information Sharing Act, which would extend legal immunity to corporations that share customer data with intelligence agencies.[12]

Ironically, these efforts to expand federal cybersecurity authority come before the federal government has even gotten its own house in order. According to a recent report, federal information security failures increased by an astounding 1,169 percent, from 5,503 in fiscal year 2006 to 69,851 in fiscal year 2014.[13] Of course, many of these same agencies would be tasked with securing the massive new datasets containing personally identifiable details about US citizens’ online activities that legislation like the Cybersecurity Information Sharing Act would authorize. In the worst-case scenario, such federal data storage could counterintuitively encourage more attacks on government systems.

It’s important to put all these security issues in some context and to realize that proposed legal remedies are often inappropriate to address online security concerns and sometimes end up backfiring. In his research on the digital security marketplace, my Mercatus Center colleague Eli Dourado has illustrated how we are already able to achieve “Internet Security without Law.”[14] Dourado documented the many informal institutions that enforce network security norms on the Internet to show how cooperation among a remarkably varied set of actors improves online security without extensive regulation or punishing legal liability. “These informal institutions carry out the functions of a formal legal system—they establish and enforce rules for the prevention, punishment, and redress of cybersecurity-related harms,” Dourado says.[15]

For example, a diverse array of computer security incident response teams (CSIRTs) operate around the globe, sharing their research on and coordinating responses to viruses and other online attacks. Individual Internet service providers (ISPs), domain name registrars, and hosting companies work with these CSIRTs and other individuals and organizations to address security vulnerabilities.

Encouraging the development of robust and lawful software vulnerability markets would provide even more effective cybersecurity reporting. Some private companies and nonprofit security research firms have offered financial incentives for hackers to find and report software vulnerabilities to the proper parties for years now.[16] Such “bug bounty” and “vulnerability auction” programs better align hackers’ monetary incentives with the public interest. By allowing a space for security researchers to responsibly report and profit from discovered bugs, these markets dissuade hackers from selling vulnerabilities to criminal or state-backed organizations.[17]

A growing market for private security consultants and software providers also competes to offer increasingly sophisticated suites of security products for businesses, households, and governments. “Corporations, including software vendors, antimalware makers, ISPs, and major websites such as Facebook and Twitter, are aggressively pursuing cyber criminals,” notes Roger Grimes of Infoworld.[18] “These companies have entire legal teams dedicated to national and international cyber crime. They are also taking down malicious websites and bot-spitting command-and-control servers, along with helping to identify, prosecute, and sue bad guys,” he says.[19] Meanwhile, more organizations are employing “active defense” strategies, which are “countermeasures that entail more than merely hardening one’s own network against threats and instead seek to unmask one’s attacker or disable the attacker’s system.”[20]

A great deal of security knowledge is also “crowd-sourced” today via online discussion forums and security blogs that feature contributions from experts and average users alike. University-based computer science and cyber law centers and experts have also helped by creating projects like Stop Badware, which originated at Harvard University but then grew into a broader nonprofit organization with diverse financial support.[21] Meanwhile, informal grassroots security groups like The Cavalry have formed to build awareness about digital security threats among developers and the general public and then devise solutions to protect public safety.[22]

The recent debacle over the Commerce Department’s proposed new export rules for so-called cyberweapons provides a good example of how poorly considered policies can inadvertently undermine such beneficial emergent ecosystems. The agency’s new draft of US “Wassenaar Arrangement” arms control policies would have unintentionally criminalized the normal communication of basic software bug-testing techniques that hundreds of companies employ each day.[23] The regulators who were drafting the new rules had good intentions. They wanted to crack down on cyber criminals’ abilities to sell malware to hostile state-backed initiatives. However, their lack of technical sophistication led them to unknowingly write a proposal that would have compelled software engineers to seek Commerce Department permission before communicating information about minor software quirks. Fortunately, regulators wisely heeded the many concerned industry comments and rescinded the initial proposal.[24]

Dourado notes that informal, bottom-up efforts to coordinate security responses offer several advantages over top-down government solutions such as administrative regulatory regimes or punishing liability regimes. First, the informal cooperative approach “gives network operators flexibility to determine what constitutes due care in a dynamic environment.” “Formal legal standards,” by contrast, “may not be able to adapt as quickly as needed to rapidly changing circumstances,” he says.[25] Simply put, markets are more nimble than mandates when it comes to promptly patching security vulnerabilities.

Second, Dourado notes that “formal legal proceedings are adversarial and could reduce ISPs’ incentives to share information and cooperate.”[26] Heavy-handed regulation or threatening legal liability schemes could have the unintended consequence of discouraging the sort of cooperation that today alleviates security problems swiftly.

Indeed, there is evidence that existing cybersecurity law prevents defensive strategies that could help organizations to more quickly respond to system infiltrations. For example, some argue that private individuals and organizations should be allowed to defend themselves using special measures to expel or track system infiltrators, often called “hacking back” or “active defense.” Anthony Glosson’s analysis for the Mercatus Center discusses how the Computer Fraud and Abuse Act currently prevents computer security specialists from utilizing defensive hacking techniques that could improve system defenses or decrease the number of attempted attacks.[27]

Third, legal solutions are less effective because “the direct costs of going to court can be substantial, as can be the time associated with a trial,” Dourado argues.[28] By contrast, private actors working cooperatively “do not need to go to court to enforce security norms,” meaning that “security concerns are addressed quickly or punishment . . . is imposed rapidly.”[29] For example, if security warnings don’t work, ISPs can “punish” negligent or willfully insecure networks by “de-peering,” or terminating network interconnection agreements. The very threat of de-peering helps keep network operators on their toes.

Finally, and perhaps most importantly, Dourado notes that international cooperation between state-based legal systems is limited, complicated, and costly. By contrast, under today’s informal, voluntary approach to online security, international coordination and cooperation are quite strong. The CSIRTs and other security institutions and researchers mentioned above all interact and coordinate today as if national borders did not exist. Territorial legal systems and liability regimes don’t have the same advantage; enforcement ends at the border.

Dourado’s model has ramifications for other fields of tech policy. Indeed, as noted above, these collaborative efforts and approaches are already at work in the realms of online safety and digital privacy. Countless organizations and individuals collaborate on educational initiatives to improve online safety and privacy. And many industry and nonprofit groups have established industry best practices and codes of conduct to ensure a safer and more secure online experience for all users. The efforts of the Family Online Safety Institute were discussed above. Another example comes from the Future of Privacy Forum, a privacy think tank that seeks to advance responsible data practices. The think tank helps create codes of conduct to ensure privacy best practices by online operators and also helps highlight programs run by other organizations.[30] Likewise, the National Cyber Security Alliance helps promote Internet safety and security efforts among a variety of companies and coordinates National Cyber Security Awareness Month (every October) and Data Privacy Day (held annually on January 28).[31]

What these efforts prove is that not every complex social problem requires a convoluted legal regime or heavy-handed regulatory response. We can achieve reasonably effective safety and security without layering on more and more law and regulation.[32] Indeed, the Internet and digital systems could arguably be made more secure by reforming outdated legislation that prevents potential security-increasing collaborations. “Dynamic systems are not merely turbulent,” Postrel notes. “They respond to the desire for security; they just don’t do it by stopping experimentation.”[33] She adds, “Left free to innovate and to learn, people find ways to create security for themselves. Those creations, too, are part of dynamic systems. They provide personal and social resilience.”[34]

Education is a crucial part of building resiliency in the security context as well. People and organizations can prepare for potential security problems rationally if given even more information and better tools to secure their digital systems and to understand how to cope when problems arise. Again, many corporations and organizations already take steps to guard against malware and other types of cyberattacks by offering customers free (or cheap) security software. For example, major broadband operators offer free antivirus software to customers and various parental control tools to parents. In the context of “connected car” technology, automakers have banded together to come up with privacy and security best practices to address worries about remote hacking of cars as well as concerns about how much data they collect about our driving habits.[35]

Thus, although it is certainly true that “more could be done” to secure networks and critical systems, panic is unwarranted because much is already being done to harden systems and educate the public about risks.[36] Various digital attacks will continue, but consumers, companies, and other organizations are learning to cope and become more resilient in the face of those threats through creative “bottom-up” solutions instead of innovation-limiting “top-down” regulatory approaches.


 

[1]    This section partially adapted from Adam Thierer, “Achieving Internet Order without Law,” Forbes, June 24, 2012, http://www.forbes.com/sites/adamthierer/2012/06/24/achieving-internet-order-without-law. The author wishes to thank Andrea Castillo for major contributions to this section.

[2]    See Richard A. Serrano, “Cyber Attacks Seen as a Growing Threat,” Los Angeles Times, February 11, 2011, A18. (“[T]he potential for the next Pearl Harbor could very well be a cyber attack.”)

[3]    Harry Raduege, “Deterring Attackers in Cyberspace,” The Hill, September 23, 2011, 11, http://thehill.com/opinion/op-ed/183429-deterring-attackers-in-cyberspace.

[4]    Kurt Nimmo, “Former CIA Official Predicts Cyber 9/11,” InfoWars.com, August 4, 2011, http://www.infowars.com/former-cia-official-predicts-cyber-911.

[5]    Rodney Brown, “Cyber Bombs: Data-Security Sector Hopes Adoption Won’t Require a ‘Pearl Harbor’ Moment,” Innovation Report, October 26, 2011, 10, http://digital.masshightech.com/launch.aspx?referral=other&pnum=&refresh=6t0M1Sr380Rf&EID=1c256165-396b-454f-bc92-a7780169a876&skip=; Craig Spiezle, “Defusing the Internet of Things Time Bomb,” TechCrunch, August 11, 2015, http://techcrunch.com/2015/08/10/defusing-the-internet-of-things-time-bomb.

[6]    “Morning Edition: Cybersecurity Bill: Vital Need or Just More Rules?” NPR, March 22, 2012, http://www.npr.org/templates/transcript/transcript.php?storyId=149099866.

[7]    Jerry Brito and Tate Watkins, “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy” (Mercatus Working Paper No. 11-24, Mercatus Center at George Mason University, Arlington, VA, 2011).

[8]    Jane K. Cramer and A. Trevor Thrall, “Introduction: Understanding Threat Inflation,” in American Foreign Policy and the Politics of Fear: Threat Inflation Since 9/11, ed. A. Trevor Thrall and Jane K. Cramer (London: Routledge, 2009), 1.

[9]    Tufekci, “Dumb Idea”; Byron Acohido, “Hackers Take Control of Internet Appliances,” USA Today, October 15, 2013, http://www.usatoday.com/story/cybertruth/2013/10/15/hackers-taking-control-of-internet-appliances/2986395.

[10]   Ed Markey, Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk, US Senate, February 2015, http://www.markey.senate.gov/imo/media/doc/2015-02-06_MarkeyReport-Tracking_Hacking_CarSecurity%202.pdf.

[11]   Ed Markey, “Markey, Blumenthal to Introduce Legislation to Protect Drivers from Auto Security and Privacy Vulnerabilities with Standards and ‘Cyber Dashboard,’” press release, February 11, 2015, http://www.markey.senate.gov/news/press-releases/markey-blumenthal-to-introduce-legislation-to-protect-drivers-from-auto-security-and-privacy-vulnerabilities-with-standards-and-cyber-dashboard.

[12]   Andrea Castillo, “How CISA Threatens Both Privacy and Cybersecurity,” Reason, May 10, 2015, https://reason.com/archives/2015/05/10/why-cisa-wont-improve-cybersecurity.

[13]   Eli Dourado and Andrea Castillo, “Poor Federal Cybersecurity Reveals Weakness of Technocratic Approach” (Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, June 22, 2015), http://mercatus.org/publication/poor-federal-cybersecurity-reveals-weakness-technocratic-approach.

[14]   Eli Dourado, “Internet Security without Law: How Security Providers Create Online Order” (Mercatus Working Paper No. 12-19, Mercatus Center at George Mason University, Arlington, VA, June 19, 2012), http://mercatus.org/publication/internet-security-without-law-how-service-providers-create-order-online.

[15]   Ibid.

[16]   Charlie Miller, “The Legitimate Vulnerability Market: Inside the Secretive World of 0-day Exploit Sales,” Independent Security Evaluators, May 6, 2007, http://www.econinfosec.org/archive/weis2007/papers/29.pdf.

[17]   Andrea Castillo, “The Economics of Software-Vulnerability Sales: Can the Feds Encourage ‘Pro-social’ Hacking?” Reason, August 11, 2015, https://reason.com/archives/2015/08/11/economics-of-the-zero-day-sales-market.

[18]   Roger Grimes, “The Cyber Crime Tide Is Turning,” Infoworld, August 9, 2011, http://www.pcworld.com/article/237647/the_cyber_crime_tide_is_turning.html.

[19]   Dourado, “Internet Security.”

[20]   Anthony D. Glosson, “Active Defense: An Overview of the Debate and a Way Forward,” (Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, August 10, 2015), http://mercatus.org/publication/active-defense-overview-debate-and-way-forward-guardians-of-peace-hackers-cybersecurity.

[21]   http://stopbadware.org.

[22]   https://www.iamthecavalry.org.

[23]   Andrea Castillo, “The Government’s Latest Attempt to Stop Hackers Will Only Make Cybersecurity Worse,” Reason, July 28, 2015, https://reason.com/archives/2015/07/28/gov-ploy-to-stop-hackers-will-backfire.

[24]   Russell Brandom, “The US is Rewriting its Controversial Zero-Day Export Policy,” The Verge, July 29, 2015, http://www.theverge.com/2015/7/29/9068665/wassenaar-export-zero-day-revisions-department-of-commerce.

[25]   Dourado, “Internet Security.”

[26]   Ibid.

[27]   Glosson, “Active Defense.”

[28]   Dourado, “Internet Security.”

[29]   Dourado, “Internet Security.”

[30]   Future of Privacy Forum, “Best Practices,” http://www.futureofprivacy.org/resources/best-practices/.

[31]   See http://www.staysafeonline.org/ncsam and http://www.staysafeonline.org/data-privacy-day.

[32]   Glosson, “Active Defense,” 22. (“The precautionary principle is especially inadvisable in the dynamic realm of tech policy, and until the ostensible harms of active defense materialize, the law should facilitate maximum innovation in the network security field.”)

[33]   Postrel, Future and Its Enemies, at 199.

[34]   Ibid., 202.

[35]   See Future of Privacy Forum, “Connected Cars Project,” accessed October 16, 2015, http://www.futureofprivacy.org/connectedcars; Auto Alliance, “Automakers Believe That Strong Consumer Data Privacy Protections Are Essential to Maintaining the Trust of Our Customers,” accessed October 16, 2015, http://www.autoalliance.org/automotiveprivacy. See also Future of Privacy Forum, “Comments of the Future of Privacy Forum on Connected Smart Technologies in Advance of the FTC ‘Internet of Things’ Workshop,” May 31, 2013, http://www.futureofprivacy.org/wp-content/uploads/FPF-Comments-Regarding-Internet-of-Things.pdf.

[36]   Adam Thierer, “Don’t Panic over Looming Cybersecurity Threats,” Forbes, August 7, 2011, http://www.forbes.com/sites/adamthierer/2011/08/07/dont-panic-over-looming-cybersecurity-threats.

 

Autonomous Vehicles Under Attack: Cyber Dashboard Standards and Class Action Lawsuits https://techliberation.com/2015/03/14/autonomous-vehicles-under-attack-cyber-dashboard-standards-and-class-action-lawsuits/ https://techliberation.com/2015/03/14/autonomous-vehicles-under-attack-cyber-dashboard-standards-and-class-action-lawsuits/#respond Sat, 14 Mar 2015 13:06:08 +0000 http://techliberation.com/?p=75511

In a recent Senate Commerce Committee hearing on the Internet of Things, Senators Ed Markey (D-Mass.) and Richard Blumenthal (D-Conn.) “announced legislation that would direct the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) to establish federal standards to secure our cars and protect drivers’ privacy.” Spurred by a recent report from his office (Tracking and Hacking: Security and Privacy Gaps Put American Drivers at Risk), Markey argued that Americans “need the equivalent of seat belts and airbags to keep drivers and their information safe in the 21st century.”

Among its many conclusions, the report says that “nearly 100% of cars on the market include wireless technologies that could pose vulnerabilities to hacking or privacy intrusions.” This comes across as a tad tautological given that everything from smartphones and computers to large-scale power grids is prone to being hacked, yet the Markey-Blumenthal proposal would enforce a separate set of government-approved, and regulated, standards for privacy and security, displayed on every vehicle in the form of a “Cyber Dashboard” decal.

Leaving aside the irony of legislators attempting to dictate privacy standards, especially in the post-Snowden world, it would behoove legislators like Markey and Blumenthal to take a closer look at just what it is they are proposing and ask whether such a law is indeed necessary to protect consumers. For security in particular, there may be concerns that require redress, but a look at the report reveals a very important omission: it cites no specific examples of real car hacking. The only examples it offers are described in brief detail:

An application was developed by a third party and released for Android devices that could integrate with a vehicle through the Bluetooth connection. A security analysis did not indicate any ability to introduce malicious code or steal data, but the manufacturer had the app removed from the Google Play store as a precautionary measure.

Great! The company solved the problem. What about the other instance cited in the report?

Some individuals have attempted to reprogram the onboard computers of vehicles to increase engine horsepower or torque through the use of “performance chips”. Some of these devices plug into the mandated onboard diagnostic port or directly into the under-the-hood electronics system.

So the only two examples of “car hacking” described in the Markey report are essentially duds. The first is a non-issue, since the company (1) determined there was little security risk involved and (2) removed the item from the market anyway, just to be sure. The second is, in a sense, hacking, but it is individual car owners doing it to their own cars. Neither case appears to be sufficient grounds for imposing a set of arbitrary and, in many cases, capriciously anti-innovation approaches to privacy and data security in cars.

In the wake of the report’s release, this past Tuesday, March 10, General Motors, Toyota, and Ford were all hit with a nationwide class action lawsuit, alleging that the companies concealed “dangers posed by a lack of electronic security in a vast swath of vehicles.” Specifically, the lawsuit is aimed at the presence of controller area network (CAN) buses, which act as data hubs between the various electronic systems in a car. These systems are, indeed, susceptible to hacking, but no more than any personal computer that is connected to the Internet.
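To see why the CAN bus attracts this attention, it helps to look at what actually travels over it. A classic CAN frame is a tiny message: an arbitration ID, a length code, and up to eight data bytes, with no sender authentication at all, which is why a compromised component can spoof messages to its neighbors. Here is a minimal sketch decoding the 16-byte frame layout used by Linux’s SocketCAN interface (the example ID and payload below are invented):

```python
import struct

# SocketCAN's classic can_frame layout: 32-bit ID (with flag bits),
# 1-byte data length code (DLC), 3 padding bytes, up to 8 data bytes.
CAN_FRAME = struct.Struct("<IB3x8s")
CAN_EFF_FLAG = 0x80000000   # set when the frame uses an extended 29-bit ID
CAN_ID_MASK = 0x1FFFFFFF    # strips flag bits, leaving the arbitration ID

def decode_frame(raw: bytes):
    """Decode one 16-byte classic CAN frame into (arbitration_id, payload)."""
    can_id, dlc, data = CAN_FRAME.unpack(raw)
    return can_id & CAN_ID_MASK, data[:dlc]

# Invented example: a 2-byte message on ID 0x244.
raw = CAN_FRAME.pack(0x244, 2, bytes([0x1A, 0x2B]) + bytes(6))
frame_id, payload = decode_frame(raw)
print(hex(frame_id), payload.hex())  # 0x244 1a2b
```

Note what is absent from the format: there is no field identifying which module sent the frame and no signature over the payload, so every node on the bus simply trusts what it receives. That architectural fact, not any single software bug, is what the lawsuit characterizes as a “defect.”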

The trouble with this lawsuit, brought by the Stanley Law Group, is that it does not cite any specific harms that have occurred as a result of this “defect” (as a side note, saying that a computer’s susceptibility to hacking constitutes a design defect is the equivalent of saying that an airplane’s susceptibility to lightning strikes makes it fundamentally defective). Rather, the plaintiffs argue that “[w]e shouldn’t need to wait for a hacker or terrorist to prove exactly how dangerous this is before requiring car makers to fix the defect.”

As Adam Thierer and I pointed out in our 2014 paper, Removing Roadblocks to Intelligent Vehicles and Driverless Cars:

Manufacturers have powerful reputational incentives at stake here, which will encourage them to continuously improve the security of their systems. Companies like Chrysler and Ford are already looking into improving their telematics systems to better compartmentalize the ability of hackers to gain access to a car’s controller-area-network bus. Engineers are also working to solve security vulnerabilities by utilizing two-way data-verification schemes (the same systems at work when purchasing items online with a credit card), routing software installs and updates through remote servers to check and double-check for malware, adopting of routine security protocols like encrypting files with digital signatures, and other experimental treatments. (pg. 40-41)
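The update-verification measures the quote describes can be sketched in a few lines. This is a toy illustration only: it uses an HMAC as a stand-in for a true public-key signature, and the key and firmware strings are invented:

```python
import hashlib
import hmac

SHARED_KEY = b"hypothetical-vendor-signing-key"  # stand-in for a real key

def sign_update(firmware: bytes) -> bytes:
    """Vendor side: tag the firmware image so tampering is detectable."""
    return hmac.new(SHARED_KEY, firmware, hashlib.sha256).digest()

def verify_update(firmware: bytes, tag: bytes) -> bool:
    """Vehicle side: refuse to install an image whose tag does not check out."""
    expected = hmac.new(SHARED_KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

firmware = b"ecu-update-v2.1"
tag = sign_update(firmware)
print(verify_update(firmware, tag))           # True
print(verify_update(firmware + b"mal", tag))  # False
```

A production scheme would use asymmetric signatures, so the verification key stored in the car could not be used to forge updates, but the check-before-install logic is the same.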

It’s always easy to see the potential for abuse and harm with any new emerging technology, but optimism and fortitude in the face of the uncertain are what help society, and individuals, grow and progress. Car hacking, while certainly a viable concern, is not so ubiquitous that it necessitates a heavy-handed regulatory approach. Rather, we should permit various standards to emerge and attempt to deal with possible harms as they arise. In this way, we can experiment to determine which approaches work and which do not. Federal standards imposed from on high assume that firms and individuals are not capable of working through these murky issues. We should be a bit more optimistic about the human capacity for ingenuity and adaptability.

To end on something of a more optimistic note, Tom Vanderbilt of Wired magazine gives keen insight into the reality of regulating based on hypothetical scenarios:

Every scenario you can spin out of computer error – what if the car drives the wrong way – already exists in analog form, in abundance. Yes, computer-guidance systems and the rest will require advances in technology, not to mention redundancy and higher standards of performance, but at least these are all feasible, and capable of quantifiable improvement. On the other hand, we’ll always have lousy drivers.
Mercatus Center Scholars Contributions to Cybersecurity Research https://techliberation.com/2015/02/23/mercatus-center-scholars-contributions-to-cybersecurity-research/ https://techliberation.com/2015/02/23/mercatus-center-scholars-contributions-to-cybersecurity-research/#comments Mon, 23 Feb 2015 16:46:00 +0000 http://techliberation.com/?p=75476

by Adam Thierer & Andrea Castillo

Cybersecurity policy is a big issue this year, so we thought it would be worth reminding folks of some contributions to the literature made by Mercatus Center-affiliated scholars in recent years. Our research, which can be found here, can be condensed to these five core points:

1)         Institutions, societies, and economies are more resilient than we give them credit for and can deal with adversity, even cybersecurity threats.

See: Sean Lawson, “Beyond Cyber-Doom: Assessing the Limits of Hypothetical Scenarios in the Framing of Cyber-Threats,” December 19, 2012.

2)         Companies and organizations have a vested interest in finding creative solutions to these problems through ongoing experimentation, and they are pursuing them with great vigor.

See: Eli Dourado, “Internet Security Without Law: How Service Providers Create Order Online,” June 19, 2012.

3)         Over-arching, top-down “cybersecurity frameworks” threaten to undermine dynamism in cybersecurity and Internet governance, and could promote rent-seeking and corruption. Instead, the government should foster continued dynamic cybersecurity efforts through the development of a robust private-sector cybersecurity insurance market.

See: Eli Dourado and Andrea Castillo, “Why the Cybersecurity Framework Will Make Us Less Secure,” April 17, 2014.

4)         The language sometimes used to describe cybersecurity threats borders on “techno-panic” rhetoric that is based on “threat inflation.”

See the Lawson paper already cited as well as: Jerry Brito & Tate Watkins “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy,” April 10, 2012; and Adam Thierer, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” January 25, 2013.

5)         Finally, taking these other points into account, our scholars have concluded that academics and policymakers should be very cautious about how they define “market failure” in the cybersecurity context. Moreover, to the extent they propose new regulatory controls to address perceived problems, those rules should be subjected to rigorous benefit-cost analysis.

See: Eli Dourado, “Is There a Cybersecurity Market Failure?” January 23, 2012.

 

Developing cybersecurity policies—like the White House’s “Securing Cyberspace” proposal and the Senate Intelligence Committee’s risen-from-the-grave Cybersecurity Information Sharing Act (CISA) of 2015—prioritize government-led “information-sharing” among federal agencies and private organizations as a one-stop technocratic solution to the dynamic problem of cybersecurity provision. But, as Eli and Andrea pointed out in a Mercatus chart series from this year, the federal government’s own record with internal information-sharing policies has been abysmal for decades.

The Federal Information Security Management Act of 2002 compelled federal investment in IT security infrastructure along with internal information-sharing of system breaches and proactive responses among agencies. Apparently, this has not worked like a charm. The chart shows that reported federal breaches have risen by over 1,000% since 2006, despite billions of dollars spent on agency systems and information-sharing capabilities over the same period.

Many of the same agencies that would be imbued with power to coordinate information-sharing among private and government entities through CISA and other cybersecurity proposals were already responsible for coordinating threat-sharing at the federal level: the National Institute of Standards and Technology (NIST), the Office of Management and Budget (OMB), and the Department of Homeland Security (DHS). Are we to believe these bodies will become magically efficient once they have more power to cajole the private sector?

Government Accountability Office (GAO) reports analyzing the failure of federal information security practices and threat coordination find that technocratic solutions that look perfectly rational and controlled on paper break down when imposed from above on employees who have no buy-in. One report concludes, “As we and inspectors general have long pointed out, federal agencies continue to face challenges in effectively implementing all elements of their information security programs.” Repeating the same failed policies in the private sector is unlikely to result in success.

Cybersecurity provision is too important an issue to be left to brittle, technocratic policies with proven track records of failure. Good cybersecurity policy will instead be grounded in an understanding of the incentives and norms that have allowed the Internet to develop and thrive, and it will target specific sources of failure.

Industry analyses find again and again that with cybersecurity, the problem exists between chair and keyboard—“human error,” not insufficient government meddling, is responsible for the vast majority of cyber incidents. Introducing more error-prone humans to the equation, as government cybersecurity plans seek to do, will only complicate the problem while neglecting the underlying factors that need addressing.

Cybersecurity will be an issue we continue to cover closely at the Mercatus Center Technology Policy Program.

The government sucks at cybersecurity https://techliberation.com/2015/01/20/the-government-sucks-at-cybersecurity/ https://techliberation.com/2015/01/20/the-government-sucks-at-cybersecurity/#comments Tue, 20 Jan 2015 21:19:11 +0000 http://techliberation.com/?p=75327

Originally posted at Medium.

The federal government is not about to allow last year’s rash of high-profile security failures of private systems like Home Depot, JP Morgan, and Sony Entertainment to go to waste without expanding its influence over digital activities.

Last week, President Obama proposed a new round of cybersecurity policies that would, among other things, compel private organizations to share more sensitive information about information security incidents with the Department of Homeland Security. This endeavor to revive the spirit of CISPA is only the most recent in a long line of government attempts to nationalize and influence private cybersecurity practices.

But the federal government is one of the last organizations that we should turn to for advice on how to improve cybersecurity policy.

Don’t let policymakers’ talk of getting tough on cybercrime fool you. Their own network security is embarrassing to the point of parody and has been getting worse for years despite spending billions of dollars on the problem.

[Chart: total federal cybersecurity (FISMA) spending and reported information security incidents, FY 2006-2013]

The chart above comes from a new analysis on federal information security incidents and cybersecurity spending by me and my colleague Eli Dourado at the Mercatus Center.

The chart uses data from the Congressional Research Service and the Government Accountability Office. The green bars, measured on the left-hand axis, display total federal cybersecurity spending required by the Federal Information Security Management Act of 2002; the blue line, measured on the right-hand axis, displays the total number of reported information security incidents on federal systems from 2006 to 2013. The chart shows that the number of federal cybersecurity failures has increased every year since 2006, even as investments in cybersecurity processes and systems have grown considerably.

In 2002, the federal government created an explicit goal for itself to modernize and strengthen its cybersecurity infrastructure by the end of that decade with the passage of the Federal Information Security Management Act (FISMA). FISMA required agency leaders to develop and implement information security protections with the guidance of offices like the National Institute of Standards and Technology (NIST), the Office of Management and Budget (OMB), and the Department of Homeland Security (DHS)—some of the same organizations tasked with coordinating information-sharing about cybersecurity threats with the private sector in Obama’s proposal, by the way—and authorized robust federal investments in IT infrastructure to meet these goals.

The chart is striking, but a quick data note on the spending numbers is in order. Both the dramatic increase in FISMA spending from $7.4 billion in FY 2009 to $12.8 billion in FY 2010 and the dramatic decrease in FISMA spending from $14.8 billion in FY 2012 to $10.3 billion in FY 2013 are partially attributable to OMB’s decision to change its FISMA spending calculation methodology in those years.

Even with this caveat on inter-year spending comparisons, the chart shows that the federal government has invested billions of dollars to improve its internal cybersecurity defenses in recent years. Altogether, the OMB reports that the federal government spent $78.8 billion on FISMA cybersecurity investments from FY 2006 to FY 2013.

(And this is just cybersecurity spending authorized through FISMA. When added to the various other authorizations on cybersecurity spending tucked in other federal programs, the breadth of federal spending on IT preparedness becomes staggering indeed.)

However, increased federal spending on cybersecurity is not reflected in the rate of cyberbreaches of federal systems reported by the GAO. The number of reported federal cybersecurity incidents increased by an astounding 1012% over the selected years, from 5,503 in 2006 to 61,214 in 2013.

Yes, 1012%. That’s not a typo.
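The figure is easy to check from the two endpoints reported above:

```python
incidents_2006 = 5_503
incidents_2013 = 61_214

# Percent increase = (new - old) / old * 100
pct_increase = (incidents_2013 - incidents_2006) / incidents_2006 * 100
print(f"{pct_increase:.0f}%")  # 1012%
```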

[Chart: share of reported federal information security incidents involving personally identifiable information, 2009-2013]

What’s worse, a growing number of these federal cybersecurity failures involve the potential exposure of personally identifiable information—private data about individuals’ contact information, addresses, and even Social Security numbers and financial accounts.

The second chart displays the proportion of all reported federal information security incidents that involved the exposure of personally identifiable information from 2009 to 2013. By 2013, over 40 percent of all reported cybersecurity failures involved the potential exposure of private data to outside groups.

It is hard to argue that these failures stem from a lack of adequate security investment. This is as much a problem of scale as it is of an inability to follow one’s own directions. In fact, the government’s own Government Accountability Office has been sounding the alarm about poor information security practices since 1997. After FISMA was implemented to address the problem, government employees promptly proceeded to ignore or undermine the provisions that would improve security—rendering the “solution” merely another checkbox on the bureaucrat’s list of meaningless tasks.

The GAO reported in April of 2014 that federal agencies systematically fail to meet federal security standards due to poor implementation of key FISMA practices outlined by the OMB, NIST, and DHS. After more than a decade of billion dollar investments and government-wide information sharing, in 2013 “inspectors general at 21 of the 24 agencies cited information security as a major management challenge for their agency, and 18 agencies reported that information security control deficiencies were either a material weakness or significant deficiency in internal controls over financial reporting.”

This weekend’s POLITICO report on lax federal security practices makes it easy to see how ISIS could hack into the CENTCOM Twitter account:

Most of the staffers interviewed had emailed security passwords to a colleague or to themselves for convenience. Plenty of offices stored a list of passwords for communal accounts like social media in a shared drive or Google doc. Most said they individually didn’t think about cybersecurity on a regular basis, despite each one working in an office that dealt with cyber or technology issues. Most kept their personal email open throughout the day. Some were able to download software from the Internet onto their computers. Few could remember any kind of IT security training, and if they did, it wasn’t taken seriously.

“It’s amazing we weren’t terribly hacked, now that I’m thinking back on it,” said one staffer who departed the Senate late this fall. “It’s amazing that we have the same password for everything [like social media.]”

Amazing, indeed.

What’s also amazing is the gall that the federal government has in attempting to butt its way into assuming more power over cybersecurity policy when it can’t even get its own house in order.

While cybersecurity vulnerabilities and data breaches remain a considerable problem in the private sector as well as the public sector, policies that failed to protect the federal government’s own information security are unlikely to magically work when applied to private industry. The federal government’s own poor track record of increasing data breaches and exposures of personally identifiable information render its systems a dubious safehouse for the huge amounts of sensitive data affected by the proposed legislation.

President Obama is expected to make cybersecurity policy a key platform issue in tonight’s State of the Union address. Given his own shop’s pathetic track record in protecting its own network security, one has to question the reasoning behind his proposals and their likely efficacy. The federal government should focus on properly securing its own IT systems before trying to expand its control over private systems.

Hack Hell https://techliberation.com/2014/12/31/hack-hell/ https://techliberation.com/2014/12/31/hack-hell/#respond Wed, 31 Dec 2014 19:24:58 +0000 http://techliberation.com/?p=75160

2014 was quite the year for high-profile hackings and puffed-up politicians trying to out-ham each other on who is tougher on cybercrime. I thought I’d assemble some of the year’s worst hits to ring in 2015.

In no particular order:

Home Depot: The 2013 Target breach that leaked around 40 million customer financial records was unceremoniously topped by Home Depot’s breach of over 56 million payment cards and 53 million email addresses in July. Both companies fell prey to similar infiltration tactics: the hackers obtained passwords from a vendor of each retail giant and exploited a vulnerability in the Windows OS to install malware in the firms’ self-checkout lanes that collected customers’ credit card data. Millions of customers became vulnerable to phishing scams and credit card fraud—with the added headache of changing payment card accounts and updating linked services. (Your intrepid blogger was mysteriously locked out of Uber for a harrowing 2 months before realizing that my linked bank account had changed thanks to the Home Depot hack and I had no way to log back in without a tedious customer service call. Yes, I’m still miffed.)

The Fappening: 2014 was a pretty good year for creeps, too. Without warning, the prime celebrity booties of popular starlets like Scarlett Johansson, Kim Kardashian, Kate Upton, and Ariana Grande mysteriously flooded the Internet in the September event crudely immortalized as “The Fappening.” Apple quickly jumped to investigate its iCloud system that hosted the victims’ stolen photographs, announcing shortly thereafter that the “celebrity accounts were compromised by a very targeted attack on user names, passwords and security questions” rather than any flaw in its system. The sheer volume of material and the caliber of the icons violated suggest this was not the work of a lone wolf, but a chain reaction of leaks, collected over time and triggered by one larger dump. For what it’s worth, some dude on 4chan claimed the Fappening was the product of an “underground celeb n00d-trading ring that’s existed for years.” While the event prompted a flurry of discussion about online misogyny, content host ethics, and legalistic tugs-of-war over DMCA takedown requests, it unfortunately did not generate a productive conversation about good privacy and security practices, as I had initially hoped.

The Snappening: The celebrity-targeted Fappening was followed by the layperson’s “Snappening” in October, when almost 100,000 photos and 10,000 personal videos sent through the popular Snapchat messaging service, some of them including depictions of underage nudity, were leaked online. The hackers did not target Snapchat itself, but instead exploited a third-party client called SnapSave that allowed users to save images and videos that would normally disappear after a certain amount of time on the Snapchat app. (Although Snapchat doesn’t exactly have the best security record anyway: In 2013, contact information for 4.6 million of its users was leaked online before the service landed in hot water with the FTC earlier this year for “deceiving” users about their privacy practices.) The hackers gained access to a 13 GB library of old Snapchat messages and dumped the images on a searchable online directory. As with the Fappening, discussion surrounding the Snappening tended to prioritize scolding service providers over promoting good personal privacy and security practices to consumers.

Las Vegas Sands Corp.: Not all of this year’s most infamous hacks sought sordid photos or privateering profit. 2014 also saw the rise of the revenge hack. In February, Iranian hackers infiltrated politically-active billionaire Sheldon Adelson’s Sands Casino not for profit or data, but for pure punishment. Adelson, a staunchly pro-Israel figure and partial owner of many Israeli media companies, drew intense Iranian ire after fantasizing about detonating an American nuclear warhead in the Iranian desert as a threat during his speech at Yeshiva University. Hackers released crippling malware into the Sands IT infrastructure early in the year, which proceeded to shut down email services, wipe hard drives clean, and destroy thousands of company computers, laptops, and expensive servers. The Sands website was also hacked to display “a photograph of Adelson chumming around with [Israeli Prime Minister] Netanyahu,” along with the message “Encouraging the use of Weapons of Mass Destruction, UNDER ANY CONDITION, is a Crime,” and a data dump of Sands employees’ names, titles, email addresses, and Social Security numbers. Interestingly, Sands was able to contain the damage internally so that guests and gamblers had no idea of the chaos that was ravaging casino IT infrastructure. Public knowledge of the hack did not surface until early December, around the time of the Sony hack. It is possible that other large corporations have suffered similar cyberattacks this year in silence.

JP Morgan: You might think that one of the world’s largest banks would have security systems that are near impossible to crack. This was not the case at JP Morgan. From June to August, hackers infiltrated JP Morgan’s sophisticated security system and siphoned off massive amounts of sensitive financial data. The New York Times reports that “the hackers appeared to have obtained a list of the applications and programs that run on JPMorgan’s computers — a road map of sorts — which they could crosscheck with known vulnerabilities in each program and web application, in search of an entry point back into the bank’s systems, according to several people with knowledge of the results of the bank’s forensics investigation, all of whom spoke on the condition of anonymity.” Some security experts suspect that a nation-state was ultimately behind the infiltration due to the sophistication of the attack and the fact that the hackers neglected to immediately sell or exploit the data or attempt to steal funds from consumer accounts. The JP Morgan hack set off alarm bells among influential financial and governmental circles since banking systems were largely considered to be safe and impervious to these kinds of attacks.

Sony: What a tangled web this was! On November 24, Sony employees were greeted by the mocking grin of a spooky screen skeleton and informed that they had been “Hacked by the #GOP” and that there was more to come. It was soon revealed that Sony’s email and computer systems had been infiltrated and shut down while some 100 terabytes of data had been stolen. The hackers proceeded to leak embarrassing company information, including emails in which executives made racial jokes, compensation data revealing a considerable gender wage disparity, and unreleased studio films like Annie and Mr. Turner. We also learned about “Project Goliath,” a conspiracy among the MPAA, Sony, and five other studios (Universal, Fox, Paramount, Warner Bros., and Disney) to revive the spirit of SOPA and attack piracy on the web “by working with state attorneys general and major ISPs like Comcast to expand court power over the way data is served.” (Goliath was their not-exactly-subtle codeword for Google.) Somewhere along the way, a few folks got wild notions that North Korea was behind this attack because of the nation’s outrage at the latest Rogen romp, The Interview. Most cybersecurity experts doubt that the hermit nation was behind the attack, although the official KCNA statement enthusiastically “supports the righteous deed.” The absurdity of the official narrative did not prevent most of our world-class journalistic and political establishment from running with the story and beating the drums of cyberwar. Even the White House and FBI goofed. The FBI and State Department still maintain North Korean culpability, even as research compiled by independent security analysts points more and more to a collection of disgruntled former Sony employees and independent lulz-seekers. Troublingly, the Obama administration publicly entertained cyberwar countermeasures against the troubled communist nation on such slim evidence. A few days later, the Internet in North Korea was mysteriously shut down. I wonder what might have caused that? Truly a mess all around.

LizardSquad: Speaking of Sony hacks, the spirit of LulzSec is alive in LizardSquad. On Christmas day, the black hat collective knocked out Sony’s Playstation network and Microsoft’s Xbox servers with a massive distributed denial of service (DDoS) attack, to the great vengeance and furious anger of gamers avoiding family gatherings across the country. These guys are not your average script-kiddies. Nexusguard chief scientist Terrence Gareau warns that the unholy lizards boast an arsenal that far exceeds normal DDoS attacks. This seems right, given the apparent difficulty that giants Sony and Microsoft had in responding to the attacks. For their part, LizardSquad claims the strength of their attack exceeded the previous record against Cloudflare this February. Megaupload Internet lord Kim Dotcom swooped in to save gamers’ Christmas festivities with a little bit of information age, uh, “justice.” The attacks were allegedly called off after Dotcom offered the hacking collective 3,000 Mega vouchers (normally worth $99 each) for his content hosting empire if they agreed to cease. The FBI is investigating the lizards for the attacks. LizardSquad then turned their attention to the Tor network, creating thousands of new relays and comprising a worrying portion of the network’s roughly 8,000 relays in an effort to unmask users. Perhaps they mean to publicize the network’s vulnerabilities? The group’s official Twitter bio reads, “I cry when Tor deserves to die.” Could this be related to the recent Pando/Tor drama that reinvigorated skepticism of Tor? As with any online brouhaha involving clashing numbers of privacy-obsessed computer whizzes with strong opinions, this incident has many hard-to-read layers (sorry!). While the Tor campaign is still developing, LizardSquad has been keeping busy with its newly launched Lizard Stresser, a DDoS-for-hire tool that anyone can use for a small fee. These lizards appear very intent on making life as difficult as possible for the powerful parties they’ve identified as enemies and will provide some nice justifications for why governments need more power to crack down on cybercrime.
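A back-of-the-envelope model shows why relay flooding is worrying. Assume (unrealistically) that circuits pick relays uniformly at random, ignoring Tor’s bandwidth weighting and persistent entry guards; a circuit is then exposed when the attacker controls both its entry and exit. The relay counts below are illustrative only:

```python
def circuit_compromise_probability(malicious: int, total: int) -> float:
    """Toy model: P(attacker holds both the entry and exit of a circuit)."""
    fraction = malicious / total
    return fraction ** 2

# Illustrative numbers: a ~8,000-relay network flooded with 3,000 hostile relays.
total_after_flood = 8_000 + 3_000
p = circuit_compromise_probability(3_000, total_after_flood)
print(f"{p:.1%}")  # 7.4%
```

Tor’s actual path selection would make the real number lower for low-bandwidth flood relays, but the quadratic shape of the risk is why large relay floods draw immediate attention.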

What a year! I wonder what the next one will bring.

One sure bet for 2015 is increasing calls for enhanced regulatory powers. Earlier this year, Eli and I wrote a Mercatus Research paper explaining why top-down solutions to cybersecurity problems can backfire and make us less secure. We specifically analyzed President Obama’s developing Cybersecurity Framework, but the issues we discuss apply to other rigid regulatory solutions as well. On December 11, in the midst of North Korea’s red herring debut in the Sony debacle, the Senate passed the Cybersecurity Act of 2014, which contains many of the same principles outlined in the Framework. The Act, which still needs House approval, strengthens the Department of Homeland Security’s role in controlling cybersecurity policy by directing DHS to create industry cybersecurity standards and begin routine information-sharing with private entities.

Ranking Member of the Senate Homeland Security Committee, Tom Coburn, had this to say: “Every day, adversaries are working to penetrate our networks and steal the American people’s information at a great cost to our nation. One of the best ways that we can defend against cyber attacks is to encourage the government and private sector to work together and share information about the threats we face.”

While the problems of poor cybersecurity and increasing digital attacks are undeniable, the solutions proposed by politicians like Coburn are dubious. The federal government should probably try to get its own house in order before it undertakes to save the cyberproperties of the nation. The Government Accountability Office reports that the federal government suffered from almost 61,000 cyber attacks and data breaches last year. The DHS itself was hacked in 2012, while a 2013 GAO report criticized DHS for poor security practices, finding that “systems are being operated without authority to operate; plans of action and milestones are not being created for all known information security weaknesses or mitigated in a timely manner; and baseline security configuration settings are not being implemented for all systems.” GAO also reports that when federal agencies develop cybersecurity practices like those encouraged in the Cybersecurity Framework or the Cybersecurity Act of 2014, they are inconsistently and insufficiently implemented.

Given the federal government’s poor track record managing its own system security, we shouldn’t expect miracles when they take a leadership role for the nation.

Another trend to watch will be the development of a more robust cybersecurity insurance market. The Wall Street Journal reports that 2014’s rash of hacking attacks stimulated sales of formerly-obscure cyberinsurance packages.

The industry had suffered in the past due to its novelty and a lack of historical data with which to accurately price insurance packages. This year, demand has been sufficiently stimulated, and actuaries have become familiar enough with the relevant risks, that the practice has finally become mainstream. Policies can cover “the costs of [data breach] investigations, customer notifications and credit-monitoring services, as well as legal expenses and damages from consumer lawsuits” and “reimbursement for loss of income and extra expenses resulting from suspension of computer systems, and provide payments to cover recreation of databases, software and other assets that were corrupted or destroyed by a computer attack.” As the market matures, cybersecurity insurers may start more actively assessing firms’ digital vulnerabilities and recommending improvements to their systems in exchange for lower premium payments, as is common in other insurance markets.
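The premium logic described above can be sketched with a toy expected-loss model; every number here is invented for illustration:

```python
def annual_premium(breach_probability: float, expected_loss: float,
                   load_factor: float = 1.3, control_discount: float = 0.0) -> float:
    """Toy cyberinsurance premium: expected annual loss, an underwriting
    load to cover costs and profit, and a discount for audited controls."""
    base = breach_probability * expected_loss * load_factor
    return base * (1 - control_discount)

# Hypothetical firm: 5% annual breach odds, $2M expected loss per breach.
without_audit = annual_premium(0.05, 2_000_000)
with_audit = annual_premium(0.05, 2_000_000, control_discount=0.15)
print(round(without_audit), round(with_audit))  # 130000 110500
```

As the article suggests, better actuarial data tightens the breach-probability estimate, and verified security improvements move the discount, which is how premiums steer firms toward better practices in mature insurance markets.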

Still, nothing ever beats good old-fashioned personal responsibility. One of the easiest ways to ensure privacy and security for yourself online is to take the time to learn how to best protect yourself or your business by developing good habits, using the right services, and remaining conscientious about your digital activities. That’s my New Year’s resolution. I think it should be yours, too! :)

Happy New Year’s, all!

Adam Thierer on Permissionless Innovation https://techliberation.com/2014/05/13/thierer/ https://techliberation.com/2014/05/13/thierer/#respond Tue, 13 May 2014 10:00:30 +0000 http://techliberation.com/?p=74547

Adam Thierer, senior research fellow with the Technology Policy Program at the Mercatus Center at George Mason University, discusses his latest book Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Thierer discusses which types of policies promote technological discoveries as well as those that stifle the freedom to innovate. He also takes a look at new technologies — such as driverless cars, drones, big data, smartphone apps, and Google Glass — and how the American public will adapt to them.

New Paper on the Cybersecurity Framework https://techliberation.com/2014/04/17/new-paper-on-the-cybersecurity-framework/ https://techliberation.com/2014/04/17/new-paper-on-the-cybersecurity-framework/#respond Thu, 17 Apr 2014 14:46:24 +0000 http://techliberation.com/?p=74409

Andrea Castillo and I have a new paper out from the Mercatus Center entitled “Why the Cybersecurity Framework Will Make Us Less Secure.” We contrast emergent, decentralized, dynamic provision of security with centralized, technocratic cybersecurity plans. Money quote:

The Cybersecurity Framework attempts to promote the outcomes of dynamic cybersecurity provision without the critical incentives, experimentation, and processes that undergird dynamism. The framework would replace this creative process with one rigid incentive toward compliance with recommended federal standards. The Cybersecurity Framework primarily seeks to establish defined roles through the Framework Profiles and assign them to specific groups. This is the wrong approach. Security threats are constantly changing and can never be holistically accounted for through even the most sophisticated flowcharts. What’s more, an assessment of DHS critical infrastructure categorizations by the Government Accountability Office (GAO) finds that the DHS itself has failed to adequately communicate its internal categories with other government bodies. Adding to the confusion is the proliferating amalgam of committees, agencies, and councils that are necessarily invited to the table as the number of “critical” infrastructures increases. By blindly beating the drums of cyber war and allowing unfocused anxieties to clumsily force a rigid structure onto a complex system, policymakers lose sight of the “far broader range of potentially dangerous occurrences involving cyber-means and targets, including failure due to human error, technical problems, and market failure apart from malicious attacks.” When most infrastructures are considered “critical,” then none of them really are.

We argue that instead of adopting a technocratic approach, the government should take steps to improve the existing emergent security apparatus. This means declassifying information about potential vulnerabilities and kickstarting the cybersecurity insurance market by buying insurance for federal agencies, which experienced 22,000 breaches in 2012. Read the whole thing, as they say.

Jack Schinasi on global privacy regulation https://techliberation.com/2014/01/21/schinasi/ https://techliberation.com/2014/01/21/schinasi/#respond Tue, 21 Jan 2014 15:01:15 +0000 http://techliberation.com/?p=74128

Jack Schinasi discusses his recent working paper, Practicing Privacy Online: Examining Data Protection Regulations Through Google’s Global Expansion, published in the Columbia Journal of Transnational Law. Schinasi takes an in-depth look at how online privacy laws differ across the world’s biggest Internet markets — specifically the United States, the European Union, and China. Schinasi discusses how we exchange data for services and whether users are aware they’re making this exchange. And, if not, should intermediaries like Google be mandated to make their data tracking more apparent? Or should we better educate Internet users about data sharing and privacy? Schinasi also covers whether privacy laws currently in place in the US and EU are effective, what types of privacy concerns necessitate regulation in these markets, and whether we’ll see China take online privacy more seriously in the future.


Thomas Rid on cyber war https://techliberation.com/2013/09/03/thomas-rid/ https://techliberation.com/2013/09/03/thomas-rid/#respond Tue, 03 Sep 2013 22:59:03 +0000 http://techliberation.com/?p=73525

Thomas Rid, author of the new book Cyber War Will Not Take Place, discusses whether so-called “cyber war” is a legitimate threat. Since the early 1990s, talk of cyber war has caused undue panic and worry, and despite major differences, the military treats the protection of cyberspace much as it treats the protection of land or sea. Rid also covers whether a cyber attack should be considered an act of war; whether it’s correct to classify a cyber attack as “war” when no violence takes place; how sabotage, espionage, and subversion come into play; and offers a positive way to view cyber attacks — have such attacks actually saved millions of lives?


The right role for government in cybersecurity https://techliberation.com/2013/08/01/the-right-role-for-government-in-cybersecurity/ https://techliberation.com/2013/08/01/the-right-role-for-government-in-cybersecurity/#comments Thu, 01 Aug 2013 12:03:50 +0000 http://techliberation.com/?p=45335

Today the Heartland Institute is publishing my policy brief, U.S. Cybersecurity Policy: Problems and Principles, which examines the proper role of government in defending U.S. citizens, organizations and infrastructure from cyberattacks, that is, criminal theft, vandalism or outright death and destruction through the use of global interconnected computer networks.

The hype around the idea of cyberterrorism and cybercrime is fast reaching a point where any skepticism risks being shouted down as willful ignorance of the scope of the problem. So let’s begin by admitting that cybersecurity is a genuine challenge. Last year, in what is believed to be the most damaging cyberattack against U.S. interests to date, a large-scale hack of some 30,000 personal computers at Saudi Aramco erased all data on their hard drives. A militant Islamic group calling itself the Cutting Sword of Justice took credit, although U.S. Defense Department analysts believe the government of Iran provided support.

This year, the New York Times and Wall Street Journal have had computer systems hacked, allegedly by agents of the Chinese government looking for information on the newspapers’ China sources. In February, the loose-knit hacker group Anonymous claimed credit for a series of hacks of the Federal Reserve Bank, Bank of America, and American Express, targeting documents about salaries and corporate financial policies in an effort to embarrass the institutions. Meanwhile, organized crime rings are probing cybersecurity at banks, universities, government organizations and any other enterprise that maintains databases containing the names, addresses, Social Security numbers and credit card numbers of millions of Americans.

These and other reports, aided by popular entertainment that often depicts social breakdown in the face of massive cyberattack, have the White House and Congress scrambling to “do something.” This year alone has seen Congressional proposals such as the Cyber Intelligence Sharing and Protection Act (CISPA) and the Cybersecurity Act, as well as a Presidential Executive Order, all aimed at cybersecurity. Common to all three is a drastic increase in the authority and control the federal government would have over the Internet and the information that resides on it, should there be any vaguely defined attack on any vaguely defined critical U.S. information assets.

Yet we skeptics recently gained some ammo. McAfee, the security software manufacturer, recently revised its estimate of annual U.S. losses attributed to cybercrime downward to $100 billion, just one-tenth of the staggering $1 trillion it estimated in 2009. This is significant because both President Barack Obama and Gen. Keith Alexander, head of U.S. Cyber Command, have invoked the $1 trillion figure to justify greater government control of the Internet.

To be sure, $100 billion is hard to dismiss, but the figure is comparable to other types of losses U.S. businesses confront. For example, auto accidents result in annual losses between $99 billion and $168 billion. So while cybersecurity is a problem that needs to be addressed, we should be careful about the way we enlist the government to do so.

We should start by questioning the rush to create new laws that have vague definitions and poor measurables for success, yet give the government sweeping powers to collect private information from third parties. The NSA’s massive collection of phone and ISP data on millions of Americans—all done within the legal scope of the PATRIOT Act—should itself give pause to anyone who thinks it’s a good idea to expand the government’s access to information on citizens.

What’s more, vaguely written law opens the door to prosecutorial abuse. My paper goes into more detail about how federal prosecutors used the Computer Fraud and Abuse Act to pile felony charges on Aaron Swartz, the renowned young Internet entrepreneur and co-creator of the social news site Reddit, for what was an act of civil disobedience that entailed, at worst, physical trespassing and a sizable but hardly damaging violation of the terms of MIT’s JSTOR academic journal indexing service.

There may indeed be some debate over the legal and ethical scope of Swartz’s actions, but they were not aimed at profit or disruption. Yet the federal government decided to use a law designed to protect the public from sophisticated criminal organizations of thieves and fraudsters against a productive member of the Internet establishment, threatening him with 35 years in prison and loss of all rights to use computers and the Internet for life. Swartz, who was plagued by depression, committed suicide before his case was adjudicated. Prosecutors posthumously dropped all charges, but controversy over the handling of the case continues to this day. (Also, a hat tip to Jerry Brito’s conversation with James Grimmelman on his Surprisingly Free podcast.)

Proper cybersecurity policy begins with understanding that there’s a limit to what government can do to prevent cybercrime or cyberattacks. Cybersecurity should not be seen as something dissociated from physical safety and security. And, for the most part, physical security is understood to entail personal responsibility. We lock our homes and garages, purchase alarm systems and similar services, and don’t leave valuables in plain sight. Businesses contract with private security companies to safeguard employees and property. Government law enforcement can be effective after the fact – investigating the crime and arresting and prosecuting the perpetrators – but police are not routinely deployed to protect private assets.

Similarly, it should not be the government’s job to protect private information assets. As with physical property, that responsibility falls to the property owner. Of course, we must recognize the government at all levels is an IT user and a custodian of its citizens’ data. As users with an interest in data protection, federal, state and local government information security managers deserve a place at the table—but as partners and stakeholders, not as dictators.

Since the first computers were networked, cybersecurity has best been managed through evolving best practices that involve communication across the user community. And yes, despite what the President and many members of Congress think, enterprises do share information about cyberattacks. For years they have managed to keep systems secure without turning vast quantities of personal data on clients and customers over to the government absent due process or any judicial warrant.

In terms of lawmaking, cybercriminal law should be treated as an extension of physical criminal law. Theft, espionage, vandalism and sabotage were recognized as crimes long before computers were invented. The legislator’s job is first to determine how current law can apply to new methods used to carry off age-old capers, amending where necessary, as opposed to creating a new category of badly-written laws.

If any new laws are needed, they should be written to punish and deter acts that involve destruction and loss. The severity of the penalties must be consonant with the severity of the act. The law must come down hard on deliberate theft, destruction, or other clear criminal intent. Well-written law will ensure that prosecutorial resources are devoted to stopping organized groups of criminals who use email scams to drain the life savings of pensioners, not to relentlessly pursue a lone activist who, as an act of protest, downloaded and posted public-record local government documents that proved embarrassing to local elected officials.

Finally, my paper also addresses acts of cyberterrorism and cyberwar, which can exceed the reach of domestic law enforcement and involve nation-states or stateless organizations such as Al-Qaida. Combatting international cyberterrorism involves diplomacy and cooperation with allies—as well as rethinking the rules of engagement regarding response to an attack.

While it is wise to have appropriate defenses in place, before rushing to expand FISA courts or demand Internet “kill switches,” we need a calmer discussion of the likelihood of a devastating act of cyberterrorism, such as hacking into air traffic control or attacking the national power grid. Despite popular notions, attacks of this caliber cannot be carried out by a lone individual with a laptop and a public WiFi connection. An attacker would need considerable resources, the cooperation of a large number of insiders, and would have to rely on a number of factors outside his control. For more, I refer readers to a SANS Institute paper and a more recent article in Slate. Both discuss the logistics involved in a number of cyberterrorism scenarios. Suffice it to say, a terrorist can accomplish more with an inexpensive yet well-placed bomb than a time-consuming multi-stage hack that risks both failure and exposure.

The most important takeaway, however, is that today’s cybersecurity challenges can be met within a constitutional framework that respects liberty, privacy, property and legal due process. Author Eric Foner has written that since the nation’s founding, its most important organizing principle has been to maintain civil law and order within a structure of limited government powers and respect for individual rights. There is no reason this balance needs to be adjusted to favor state power at the expense of individual rights in combating computer crime or defending the nation’s information systems from foreign attack.

Related Articles:

Robert Samuelson Engages in a Bit of Argumentum in Cyber-Terrorem

CISPA’s vast overreach

Rise of the cyber-industrial complex

Do we need a special government program to help cybersecurity supply meet demand?

 

Why the Lawsuit Challenging NSA Surveillance is Crucial to Internet Freedom https://techliberation.com/2013/07/16/why-the-lawsuit-challenging-nsa-surveillance-is-crucial-to-internet-freedom/ https://techliberation.com/2013/07/16/why-the-lawsuit-challenging-nsa-surveillance-is-crucial-to-internet-freedom/#comments Tue, 16 Jul 2013 22:15:30 +0000 http://techliberation.com/?p=45222

In June, The Guardian ran a groundbreaking story that divulged a top secret court order forcing Verizon to hand over to the National Security Agency (NSA) all of its subscribers’ telephony metadata—including the phone numbers of both parties to any call involving a person in the United States and the time and duration of each call—on a daily basis. Although media outlets have published several articles in recent years disclosing various aspects of the NSA’s domestic surveillance, the leaked court order obtained by The Guardian revealed hard evidence that NSA snooping goes far beyond suspected terrorists and foreign intelligence agents—instead, the agency routinely and indiscriminately targets private information about all Americans who use a major U.S. phone company.
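It helps to see concretely what "telephony metadata" means: routing information about a call, never the audio itself. The sketch below is illustrative only; the field names are hypothetical, not the carrier's actual schema or the order's wording.

```python
# A sketch of the kind of call-detail record the order covers:
# routing data only, never the content of the conversation.
# Field names are illustrative, not drawn from the actual order.
from dataclasses import dataclass

@dataclass
class CallRecord:
    originating_number: str   # phone number of the caller
    terminating_number: str   # phone number of the recipient
    start_time: str           # when the call began (ISO 8601)
    duration_seconds: int     # how long the call lasted
    # Deliberately absent: any recording or transcript. Metadata alone
    # still reveals who talked to whom, when, and for how long.

record = CallRecord("202-555-0101", "703-555-0199", "2013-04-25T14:03:00Z", 541)
print(record.duration_seconds)  # 541
```

Collected daily across every subscriber, even these few fields allow the associational mapping that the later sections of this post describe.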

It was only a matter of time before the NSA’s surveillance program—which is purportedly authorized by Section 215 of the USA PATRIOT Act (50 U.S.C. § 1861)—faced a challenge in federal court. The Electronic Privacy Information Center fired the first salvo on July 8, when the group filed a petition urging the U.S. Supreme Court to issue a writ of mandamus nullifying the court orders authorizing the NSA to coerce customer data from phone companies. But as Tim Lee of The Washington Post pointed out in a recent essay, the nation’s highest Court has never before reviewed a decision of the Foreign Intelligence Surveillance Act (FISA) court, which is responsible for issuing the top secret court order authorizing the NSA’s surveillance program.

Today, another crucial lawsuit challenging the NSA’s domestic surveillance program was brought by a diverse coalition of nineteen public interest groups, religious organizations, and other associations. The coalition, represented by the Electronic Frontier Foundation, includes TechFreedom, Human Rights Watch, Greenpeace, and the Bill of Rights Defense Committee, among many other groups. The lawsuit, brought in the U.S. district court in northern California, argues that the NSA’s program—aptly described as the “Associational Tracking Program” in the complaint—violates the First, Fourth, and Fifth Amendments to the Constitution, along with the Foreign Intelligence Surveillance Act.

In a statement today, TechFreedom President Berin Szoka described the lawsuit as follows:

We’re standing up for the constitutional rights of all Americans: The First Amendment protects our right to communicate and associate privately. The Fourth Amendment protects us against unreasonable searches and seizures by barring the kind of general warrant that compelled U.S. telephone carriers to turn over potentially sensitive information about Americans’ telephone call records. The secretive processes of the Foreign Intelligence Surveillance Court violate the most fundamental guarantees of the Fifth Amendment to due process, as well as basic principles of the rule of law.

Amen. Our founding fathers wrote the Fourth Amendment to prevent precisely this kind of secretive sifting through citizens’ private records. As the recent scandal involving the IRS targeting tea party groups illustrates, America’s founders knew all too well that government would always be tempted to use perfectly innocuous information about Americans’ beliefs and behaviors to harass them and treat them unfairly. This is why our Constitution and federal laws restrict the government’s power to collect private information about its citizens. These rules exist not so criminals can conceal their behavior, but to protect you and me. And when the government violates those rules, it is acting criminally.

Think you’re off the hook because you communicate primarily using the Internet, rather than via phone? Think again. We know that far more extensive collection of Americans’ data has occurred under the same authority—50 U.S.C. § 1861—upon which the Associational Tracking Program is based.

According to a leaked 2009 NSA Inspector General report, NSA in 2001 began collecting “bulk Internet metadata” from at least three unknown large Internet companies. A 2007 DOJ memo regarding “supplemental procedures” for NSA data collection authorized the agency to collect Internet metadata—including the “email address[es]” of each sender and recipient of an email, along with their “IP address”—for “persons in the United States.” The memo further states that “NSA has in its database a large amount of communications metadata associated with persons in the United States.” However, a spokesman for James Clapper, the Director of National Intelligence, has claimed this Internet metadata collection program was “discontinued in 2011 for operational and resource reasons.” Who knows if this is accurate, or another “clearly erroneous” statement that will be corrected in future months or years in a statement resembling the letter James Clapper sent to the Senate Intelligence Committee a few weeks ago.

Yet if the NSA’s Associational Tracking Program is lawful, the Internet metadata program is probably legal as well. If courts fail to halt the NSA’s program as it currently exists, and clarify what Section 215 of the USA PATRIOT Act really means, nothing is stopping the government from resuming its acquisition of Internet metadata—that is, if it hasn’t already done so.

These suspicionless mass surveillance programs don’t just endanger our constitutional rights. They also threaten free enterprise in the information economy. Increasingly, we transact, communicate, innovate, and create in the digital realm, where information itself is a form of wealth. But if Americans reasonably perceive that their digital communications—including metadata—are subject to warrantless governmental interception, some who might otherwise use cloud services will choose not to do so. Not only would this distort the future of Internet commerce, it might cause cloud computing servers and businesses to move or be formed abroad—which, ironically, could deny U.S. law enforcement access to this cloud data.

If the information age is to realize its full potential, providers of electronic communications services must be free to make credible assurances to their users about when private information will be shared, and with whom. Users need to know that the data they relinquish is confined to agreed-upon business, transactional, and record-keeping purposes—not automatically stored in a government datacenter.

Book Review: Ronald Deibert’s “Black Code: Inside the Battle for Cyberspace” https://techliberation.com/2013/07/16/book-review-ronald-deiberts-black-code-inside-the-battle-for-cyberspace/ https://techliberation.com/2013/07/16/book-review-ronald-deiberts-black-code-inside-the-battle-for-cyberspace/#comments Tue, 16 Jul 2013 13:01:57 +0000 http://techliberation.com/?p=45184

Black Code coverRonald J. Deibert is the director of The Citizen Lab at the University of Toronto’s Munk School of Global Affairs and the author of an important new book, Black Code: Inside the Battle for Cyberspace, an in-depth look at the growing insecurity of the Internet. Specifically, Deibert’s book is a meticulous examination of the “malicious threats that are growing from the inside out” and which “threaten to destroy the fragile ecosystem we have come to take for granted.” (p. 14) It is also a remarkably timely book in light of the recent revelations about NSA surveillance and how it is being facilitated with the assistance of various tech and telecom giants.

The clear and colloquial tone that Deibert employs in the text helps make arcane Internet security issues interesting and accessible. Indeed, some chapters of the book almost feel like they were pulled from the pages of a techno-thriller, complete with villainous characters, unexpected plot twists, and shocking conclusions. “Cyber crime has become one of the world’s largest growth businesses,” Deibert notes (p. 144) and his chapters focus on many prominent recent examples, including cyber-crime syndicates like Koobface, government cyber-spying schemes like GhostNet, state-sanctioned sabotage like Stuxnet, and the vexing issue of zero-day exploit sales.

Deibert is uniquely qualified to narrate this tale not just because he is a gifted story-teller but also because he has had a front row seat in the unfolding play that we might refer to as “How Cyberspace Grew Less Secure.” Indeed, he and his colleagues at The Citizen Lab have occasionally been major players in this drama as they have researched and uncovered various online vulnerabilities affecting millions of people across the globe. (I have previously reviewed and showered praise on a couple important books that Deibert co-edited with scholars from The Citizen Lab and Harvard’s Berkman Center, including: Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace and Access Denied: The Practice and Policy of Global Internet Filtering. They are truly outstanding resources worthy of your attention.)

Black Code’s Many Meanings

So, what is “black code” and why should we be worried about it? Deibert uses the term as a metaphor for many closely related concerns. Most generally it includes “that which is hidden, obscured from the view of the average Internet user.” (p. 6) More concretely, it refers to “the criminal forces that are increasingly insinuating themselves into cyberspace, gradually subverting it from the inside out.” (p. 7) “Those who take advantage of the Internet’s vulnerabilities today are not just juvenile pranksters or frat house brats,” Deibert notes, “they are organized criminal groups, armed militants, and nation states.” (p. 7-8) Which leads to the final way Deibert uses the term “black code.” It also, he says, “refers to the growing influence of national security agencies, and the expanding network of contractors and companies with whom they work.” (p. 8)

Deibert is worried about the way these forces and factors are working together to undermine online stability and security, and even delegitimize liberal democracy itself. His thesis is probably most succinctly captured in this passage from Chapter 7:

We live in an era of unprecedented access to information, and many political parties campaign on platforms of transparency and openness. And yet, at the same time, we are gradually shifting the policing of cyberspace to a dark world largely free from public accountability and independent oversight. In entrusting more and more information to third parties, we are signing away legal protections that should be guaranteed by those who have our data. Perversely, in liberal democratic countries we are lowering the standards around basic rights to privacy just as the center of cyberspace gravity is shifting to less democratic parts of the world. (p. 130-1)

What Deibert is grappling with in this book is the same fundamental problem that has long plagued the Internet: How do you preserve the benefits associated with the most open and interconnected “network of networks” the world has ever known while also remedying the various vulnerabilities and pathologies created by that same openness and interconnectedness?  Deibert acknowledges this problem, noting:

Ever since the Internet emerged from the world of academia into the world of the rest of us, its growth trajectory has been shadowed by a grey economy that thrives on opportunities for enrichment made possible by an open, globally connected infrastructure. (p. 141)

The Paradox of the Net’s Open, Interconnected Nature

Again, paradoxically, this inherent instability and vulnerability is due precisely to the Net’s open and globally interconnected nature. And many governments are looking to exploit that fact. “These unfortunate by-products of an open, dynamic network are exacerbated by increasing assertions of state power,” Deibert notes. (p. 233)

More generally, this uncomfortable fact—that the Net’s open, interconnected nature leads to both enormous benefits as well as huge vulnerabilities—isn’t just true for criminal online activity or the cyber-espionage activities that various nation-states are pursuing today. It is equally true for everything online today. There is a sort of yin and yang to the Net that is simply undeniable and completely unavoidable. For one issue after another we find that the Net’s greatest blessing—its open, interconnected nature—is also its greatest curse.

For example, as I noted here recently in my review of Abraham H. Foxman and Christopher Wolf ‘s new book, Viral Hate: Containing Its Spread on the Internet, the open and interconnected Internet gives us “the most widely accessible, unrestricted communications platform the world has ever known” but also  means we have to tolerate a great many imbeciles “who use it to spew insulting, vile, and hateful comments.” The same is true for other types of online speech and content: You have access to an abundance of informational riches, but there’s also no avoiding all the garbage out there now, too.

Similarly, as I noted in my essay, “Privacy as an Information Control Regime: The Challenges Ahead,” the open and interconnected Internet has given us historically unparalleled platforms for social interaction and commerce. But that same openness and interconnectedness has left us with a world of hyper-exposure and a variety of privacy and surveillance threats—not just from governments and large corporations, but also from each other.

And then there’s the never-ending story of digital copyright. On one hand, the open and globally interconnected network of networks has provided us with an amazing platform for sharing knowledge, art, and expression. On the other hand, as I noted in this essay on “The Twilight of Copyright,” creators of expressive works have less security than ever before in terms of how they can control and monetize their artistic and scientific inventions.

I could go on and on—as I did in my essays on “Copyright, Privacy, Property Rights & Information Control: Common Themes, Common Challenges” and “When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed”—but the moral of the story is pretty clear: The Internet giveth and the Internet taketh away. Openness and interconnectedness offer us enormous benefits but also force us to confront major risks as the price of admission to this wonderful network.

Will the Whole System Collapse?

The uncomfortable question that Deibert’s book tees up for discussion is: When will this balance get completely out of whack in terms of online security? Or, has it already? In some portions of the text, he hints that may already be the case. Consider this passage in Chapter 11 in which Deibert discusses whether the Chicken Little-ism of digital security worry-warts like Eugene Kaspersky and Richard Clarke is warranted:

Eugene Kaspersky, Richard Clarke, and others may sound like broken records or self-serving fear mongers, but there is no denying the evolving cyberspace ecosystem around us: we are building a digital edifice for the entire planet, and it sits above us like a house of cards. We are wrapping ourselves in expanding layers of digital instructions, protocols, and authentication mechanisms, some of them open, scrutinized, and regulated, but many closed, amorphous, and poised for abuse, buried in the black arts of espionage, intelligence gathering, and cyber and military affairs. Is it only a matter of time before the whole system collapses? (p. 186)

That sounds horrific, but is the entire system really about to collapse? And, if so, what are we going to do about it?

This raises a small problem with Deibert’s book. He does such a nice job itemizing and describing these security vulnerabilities that by the time the reader wades through 230 pages and nears the end of the book, they are left in a highly demoralized state, searching for some hope and a concrete set of practical solutions. Unfortunately, they won’t find an abundance of either in Deibert’s brief closing chapter, “Toward Distributed Security and Stewardship in Cyberspace.”

Don’t get me wrong; I agree with the general thrust of Deibert’s framework, which I describe below. The problem is that it is highly aspirational in nature and lacks specifics. Perhaps that is simply because there are no easy answers here. Digital security is damn hard and, as with most other online pathologies out there, no silver-bullet solutions exist.

Deibert notes that some government officials will seek to exploit those vulnerabilities—many of which they created themselves—to expand their authority over the Internet. “Faced with mounting problems and pressures to do something, too many policy-makers are tempted by extreme solutions,” he notes. (p. 234) He worries about “a movement towards clamp down” that would be “antithetical to the principles of liberal democratic government” by undermining checks and balances and accountability. (p. 235) In turn, this will undermine the “mixed common-pool resource” that is the current Internet.

Deibert’s alternative cyber security strategy to counter the push to “clamp down” is based on three interrelated notions or components:

  1. Principles of restraint or “mutual restraint”: “Securing cyberspace requires a reinforcement, rather than a relaxation, of restraint on power, including checks and balances on governments, law enforcement, intelligence agencies, and on the private sector,” he argues. (p. 239)
  2. “Distributed security”: “The Internet functions precisely because of the absence of centralized control, because of thousands of loosely coordinated monitoring mechanisms,” Deibert notes. “While these decentralized mechanisms are not perfect and can occasionally fail, they form the basis of a coherent distributed security strategy. Bottom-up, ‘grassroots’ solutions to the Internet’s security problems are consistent with principles of openness, avoid heavy-handedness, and provide checks and balances against the concentrations of power,” he observes. (p. 240)
  3. “Stewardship” which Deibert defines as “an ethic of responsible behavior in regard to shared resources” and which, he argues, “would moderate the dangerously escalating exercise of state power in cyberspace by defining limits and setting thresholds of accountability and mutual restraint.” (p. 243)

Again, as an aspirational vision statement this all generally sounds fairly sensible, but the details are lacking. I think Deibert would have been wise to spend a bit more time developing this alternative “bottom-up” vision of how online security should work and bolstering it with case studies.

Digital Security without Top-Down Controls

Luckily, as my Mercatus Center colleague Eli Dourado noted in an important June 2012 white paper, distributed security and stewardship strategies are already working reasonably well today. Dourado’s paper, “Internet Security Without Law: How Service Providers Create Order Online,” documented the many informal institutions that enforce network security norms on the Internet and showed how cooperation among a remarkably varied set of actors improves online security without extensive regulation or punishing legal liability. “These informal institutions carry out the functions of a formal legal system—they establish and enforce rules for the prevention, punishment, and redress of cybersecurity-related harms,” Dourado noted.

For example, a diverse array of computer security incident response teams (CSIRTs) operates around the globe and shares research and coordinates responses to viruses and other online attacks. Individual Internet service providers (ISPs), domain name registrars, and hosting companies work with these CSIRTs and other individuals and organizations to address security vulnerabilities. A growing market for private security consultants and software providers also competes to offer increasingly sophisticated suites of security products for businesses, households, and governments.

A great deal of security knowledge is also “crowd-sourced” today via online discussion forums and security blogs that feature contributions from experts and average users alike. University-based computer science and cyberlaw centers (like Citizen Lab) and experts have also helped by creating projects like “Stop Badware,” which originated at Harvard University but then grew into a broader non-profit organization with diverse financial support.

Dourado goes on in his paper to show how these informal, bottom-up efforts to coordinate security responses offer several advantages over top-down government solutions, such as administrative regulation or punishing liability regimes.

Dourado’s description of the ideal approach to online security is entirely consistent with Deibert’s vision in Black Code. In fact, Deibert notes, “It is important to remind ourselves that in spite of the threats, cyberspace runs well and largely without persistent disruption. On a technical level, this efficiency is founded on open and distributed networks of local engineers who share information as peers,” he observes. (p. 240) That is exactly right, but I wish Deibert would have spent more time discussing how this system works in practice today and how it can be tweaked and improved to head off the heavy-handed and very costly top-down solutions that we both dread.

Toward Resiliency

But there’s one other thing I wish Deibert would have explored in the book: resiliency, or how we have adapted to various cyber-vulnerabilities over time.

For example, in another recent Mercatus Center study entitled “Beyond Cyber Doom: Cyber Attack Scenarios and the Evidence of History,” Sean Lawson, an assistant professor in the Department of Communication at the University of Utah, has stressed the importance of resiliency as it pertains to cybersecurity and concerns about “cyberwar.” “Research by historians of technology, military historians, and disaster sociologists has shown consistently that modern technological and social systems are more resilient than military and disaster planners often assume,” he writes. “Just as more resilient technological systems can better respond in the event of failure, so too are strong social systems better able to respond in the event of disaster of any type.”

More generally, as I noted in my recent law review article on “technopanics” and “threat inflation” in information technology policy debates:

while it is certainly true that “more could be done” to secure networks and critical systems, panic is unwarranted because much is already being done to harden systems and educate the public about risks. Various digital attacks will continue, but consumers, companies, and other organizations are learning to cope and become more resilient in the face of those threats.

What Professor Lawson and I are getting at in our respective articles is that the ability of organizations, institutions, and individuals to bounce back from adversity is a frequently unheralded feature of various systems and that it deserves more serious study. (See Andrew Zolli and Ann Marie Healy’s nice book, Resilience: Why Things Bounce Back, for more on this general topic). In the context of online security, what is most remarkable to me is not that the Internet suffers from vulnerabilities due to its open and interconnected nature; it’s that we don’t suffer far more damage as a result.

This gets us back to that very profound question that Deibert poses in Black Code: “Is it only a matter of time before the whole system collapses?” The better question, I think, is: why hasn’t the system already collapsed? Perhaps the answer is that things haven’t gotten bad enough yet. But I believe the more realistic answer is that individuals and institutions often learn how to cope and become resilient in the face of adversity. This is partially the case online because of the stewardship and distributed, decentralized security we already see at work today that makes digital life tolerable.

But it has to be something more than that. After all, many of the security problems that Deibert describes in his book are quite serious and already affect millions of us today. How, then, are we getting by right now? Again, I think the answer has to be that adaptation and resiliency are at work on many different levels of online life.

Consider, for example, how we have learned to deal with spam, viruses, online porn, various online advertising and privacy concerns, and so on. Our adaptation to these threats and annoyances has not been perfectly smooth, of course. No doubt, some people would still like “something to be done” about these things. But isn’t it remarkable how we have, nonetheless, carried on with online commerce and interactive social life even as these problems have persisted?

Conclusion

Going forward, therefore, perhaps there are some reasons for hope. Perhaps the various generic strategies that Deibert outlines in his book, coupled with the remarkable ability of humans to roll with the punches and adapt, will help us come out of this just fine (or at least reasonably well).

Of course, it could also be the case that these security concerns just multiply and that the Internet then morphs into something quite different than the interconnected “network of networks” we know today. As I noted in my 2009 essay on “Internet Security Concerns, Online Anonymity, and Splinternets,” we might be moving toward a world with more separate, disconnected digital networks and online “gated communities.” This could take place spontaneously over time and be driven by corporations seeking to satisfy the demand of some consumers for safer and more secure online experiences. As I noted in my review of Jonathan Zittrain’s book, The Future of the Internet, I am actually fine with some of that. I think we can live in a hybrid world of “walled gardens” alongside the “Wild West” open Internet, so long as this occurs in a spontaneous, organic, bottom-up fashion. [For a more extensive discussion, see my book chapter, “The Case for Internet Optimism, Part 2 – Saving the Net From Its Supporters.”]

If, however, this “splintering” of the Net is done from the top-down through intentional (or even incidental) government action, then it is far more problematic. We already see signs, for example, that Russia is pushing even more strongly in that direction in the wake of the NSA leaks. (See “N.S.A. Leaks Revive Push in Russia to Control Net,” New York Times, July 14.) The Russians have been using amorphous security concerns to push for greater Internet control for some time now. Of course, China has been there for years. So have many Middle Eastern countries. Of course, there’s no guarantee that their respective “splinternets” are, or would be, any more secure than today’s Internet, but it sure would make those networks far more susceptible to state control and surveillance. If that’s our future, then it certainly is a dismal one.

Anyway, read Ron Deibert’s Black Code for an interesting exploration of these and other issues. It’s an excellent contribution to the field of Internet policy studies and a book that I’ll be recommending to others for many years to come.


Additional resources:

Other books you should read alongside “Black Code” (links are for my reviews of each book):

]]>
https://techliberation.com/2013/07/16/book-review-ronald-deiberts-black-code-inside-the-battle-for-cyberspace/feed/ 2 45184
Patrick Ruffini on the defeat of SOPA https://techliberation.com/2013/07/02/patrick-ruffini-on-the-defeat-of-sopa/ https://techliberation.com/2013/07/02/patrick-ruffini-on-the-defeat-of-sopa/#respond Tue, 02 Jul 2013 10:00:23 +0000 http://techliberation.com/?p=45095

Patrick Ruffini, political strategist, author, and President of Engage, a digital agency in Washington, DC, discusses his latest book with coauthors David Segal and David Moon: Hacking Politics: How Geeks, Progressives, the Tea Party, Gamers, Anarchists, and Suits Teamed Up to Defeat SOPA and Save the Internet. Ruffini covers the history behind SOPA, its implications for Internet freedom, the “Internet blackout” in January of 2012, and how the threat of SOPA united activists, technology companies, and the broader Internet community.

Download

Related Links

 

 

]]>
https://techliberation.com/2013/07/02/patrick-ruffini-on-the-defeat-of-sopa/feed/ 0 45095
Robert Samuelson Engages in a Bit of Argumentum in Cyber-Terrorem https://techliberation.com/2013/07/01/robert-samuelson-engages-in-a-bit-of-argumentum-in-cyber-terrorem/ https://techliberation.com/2013/07/01/robert-samuelson-engages-in-a-bit-of-argumentum-in-cyber-terrorem/#comments Mon, 01 Jul 2013 14:44:00 +0000 http://techliberation.com/?p=45052

Washington Post columnist Robert J. Samuelson published an astonishing essay today entitled, “Beware the Internet and the Danger of Cyberattacks.” In the print edition of today’s Post, the essay actually carries a different title: “Is the Internet Worth It?” Samuelson’s answer is clear: It isn’t. He begins his breathless attack on the Internet by proclaiming:

If I could, I would repeal the Internet. It is the technological marvel of the age, but it is not — as most people imagine — a symbol of progress. Just the opposite. We would be better off without it. I grant its astonishing capabilities: the instant access to vast amounts of information, the pleasures of YouTube and iTunes, the convenience of GPS and much more. But the Internet’s benefits are relatively modest compared with previous transformative technologies, and it brings with it a terrifying danger: cyberwar.

And then, after walking through a couple of worst-case hypothetical scenarios, he concludes the piece by saying:

the Internet’s social impact is shallow. Imagine life without it. Would the loss of e-mail, Facebook or Wikipedia inflict fundamental change? Now imagine life without some earlier breakthroughs: electricity, cars, antibiotics. Life would be radically different. The Internet’s virtues are overstated, its vices understated. It’s a mixed blessing — and the mix may be moving against us.

What I found most troubling about this is that Samuelson has serious intellectual chops and usually sweats the details in his analysis of other issues. He understands economic and social trade-offs and usually does a nice job weighing the facts on the ground instead of engaging in the sort of shallow navel-gazing and anecdotal reasoning that many other weekly newspaper columnists engage in on a regular basis.

But that’s not what he does here. His essay comes across as a poorly researched, angry-old-man-shouting-at-the-sky sort of rant. There’s no serious cost-benefit analysis at work here; just the banal assertion that a new technology has created new vulnerabilities.  Really, that’s the extent of the logic. Samuelson could just as well have substituted the automobile, airplanes, or any other modern technology for the Internet and drawn the same conclusion: It opens the door to new vulnerabilities (especially national security vulnerabilities) and, therefore, we would be better off without it in our lives.

Samuelson does admit that “Life would be radically different… without some earlier breakthroughs: electricity, cars, antibiotics,” so it is obvious he thinks their benefits outweigh their costs. But I could just as well say that new technologies such as cars and planes bring death and destruction, both in the theater of war and in everyday life. So, one might conclude of modern transportation technology that the “virtues are overstated, its vices understated. It’s a mixed blessing — and the mix may be moving against us,” just as Samuelson concludes of the Net.  Of course, such an assertion would be absurd without reference to the many benefits that accrue to us from these technologies. I don’t think I need to cite them all here. But Samuelson is certainly a sharp enough guy that he would engage in such a cost-benefit analysis if someone made such an assertion about other technologies.

When it comes to the Internet, however, all he can say about benefits is that “the instant access to vast amounts of information, the pleasures of YouTube and iTunes, the convenience of GPS and much more.” (GPS? Really? Strictly speaking, that’s not an Internet technology, Bob. But perhaps you have something against satellite technology, too! Looking forward to your column, “Is Satellite Communication Worth It?”)

Of course the first benefit of the Internet that Samuelson cites — “instant access to vast amounts of information” — is nothing to sneeze at! The fact that he so casually dismisses that benefit is rather troubling. For the vast majority of human history, we have lived in what we might think of as a state of extreme information poverty. Today, by contrast, we are blessed to live in amazing times. An entire planet of ubiquitous, instantly accessible media and information is now at our fingertips. We are able to share culture and engage with others — both socially and commercially — in ways that were unthinkable and impossible even just a few decades ago.

It’s hard to quantify the benefits associated with these facts, but I would think most of us would agree they are enormous. But it’s hardly the only sort of benefit that comes from the Internet and modern digital communications technologies. The fact that Samuelson can’t think of anything more is either a serious failure of imagination or, more troubling, an intentional effort to minimize and ignore those benefits in order to prey on people’s worst fears.

I’ve spent a lot of time thinking about “technopanics” and the role that journalists sometimes play in hyping them. See, for example, my essay last summer, “Journalists, Technopanics & the Risk Response Continuum,” which is based on my Minnesota Journal of Law, Science & Technology law review article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” As I explain in that article, the model for what Samuelson has done in his essay is actually a very old logical fallacy: a so-called “appeal to fear.” Here’s how I explain it in my law review article:

Rhetoricians employ several closely related types of “appeals to fear.” Douglas Walton, author of Fundamentals of Critical Argumentation, outlines the argumentation scheme for “fear appeal arguments” as follows:
  • Fearful Situational Premise: Here is a situation that is fearful to you.
  • Conditional Premise: If you carry out A, then the negative consequences portrayed in the fearful situation will happen to you.
  • Conclusion: You should not carry out A.
This pattern of logic is referred to as argumentum in terrorem or argumentum ad metum. A closely related variant of this argumentation scheme is known as argumentum ad baculum, or an argument based on a threat. Argumentum ad baculum literally means “argument to the stick,” an appeal to force. Walton outlines the argumentum ad baculum argumentation scheme as follows:
  • Conditional Premise: If you do not bring about A, then consequence B will occur.
  • Commitment Premise: I commit myself to seeing to it that B comes about.
  • Conclusion: You should bring about A.
As will be shown, these argumentation devices are at work in many information technology policy debates today even though they are logical fallacies or based on outright myths. They tend to lead to unnecessary calls for anticipatory regulation of information or information technology.

I continue in that article to provide several examples of how “argumentum in cyber-terrorem” logic is at work in a number of digital policy arenas today, especially as it pertains to cybersecurity and cyberwar fears. My Mercatus Center colleagues Jerry Brito and Tate Watkins have warned of the dangers of “threat inflation” in cybersecurity policy in their important paper, “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy.” The rhetoric of cybersecurity debates illustrates how threat inflation is a crucial part of “argumentum in cyber-terrorem” logic. Frequent allusions are made in cybersecurity debates to the potential for a “Digital Pearl Harbor,” a “cyber cold war,” a “cyber Katrina,” or even a “cyber 9/11.” These analogies are made even though those historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to “cyber bombs” even though no one can be “bombed” with binary code. A rush to judgment often follows inflated threats.

And that’s exactly what Samuelson has done in his essay. He’s rushed to an illogical, sweeping conclusion — namely, that we would be better off just bottling up the Net, or “repealing” it (whatever that means) — and he hasn’t even bothered considering the costs of such action. Worse yet, even though he admits that, “I don’t know the odds of this technological Armageddon. I doubt anyone does. The fears may be wildly exaggerated,” that doesn’t stop him from suggesting that we should live in fear of worst case hypothetical scenarios and take radical steps based upon them.

Again, it is certainly true that the Internet creates new vulnerabilities, including national security vulnerabilities, but that simply cannot be the end of the story. Those vulnerabilities need to be carefully evaluated and measured and, before we rush to panicked conclusions and advocate sweeping policy solutions, the corresponding benefits of the Internet must be taken into consideration.

Instead, Samuelson has engaged in the worst sort of fear-based, factually-challenged reasoning in his essay. It’s a model for how not to think or write about Internet policy. A more thoughtful analysis would acknowledge that the Internet is more than just “a symbol of progress;” it constitutes real progress and an improvement of the human condition.  And while it’s all too easy for newspaper columnists to suggest “we would be better off without it” and that it should be “repealed,” there are all too many government goons out there who would like to do just that since the Net has empowered the masses and given them a voice like no other technology in history.

Shame on Robert Samuelson for dismissing these realities — and the Internet’s many benefits — so lightly.

 

[Note: See all my essays on “technopanics” here.]

]]>
https://techliberation.com/2013/07/01/robert-samuelson-engages-in-a-bit-of-argumentum-in-cyber-terrorem/feed/ 1 45052
Declan McCullagh on the NSA leaks https://techliberation.com/2013/06/18/declan-mccullagh/ https://techliberation.com/2013/06/18/declan-mccullagh/#respond Tue, 18 Jun 2013 10:00:21 +0000 http://techliberation.com/?p=44980

Declan McCullagh, chief political correspondent for CNET and former Washington bureau chief for Wired News, discusses recent leaks of NSA surveillance programs. What do we know so far, and what more might be unveiled in the coming weeks? McCullagh covers legal challenges to the programs, the Patriot Act, the Fourth Amendment, email encryption, the media and public response, and broader implications for privacy and reform.

Download

Related Links

 

 

]]>
https://techliberation.com/2013/06/18/declan-mccullagh/feed/ 0 44980
The Media’s Sound and Fury Over NSA Surveillance https://techliberation.com/2013/06/10/the-medias-sound-and-fury-over-nsa-surveillance/ https://techliberation.com/2013/06/10/the-medias-sound-and-fury-over-nsa-surveillance/#comments Mon, 10 Jun 2013 13:35:59 +0000 http://techliberation.com/?p=44926

***Cross-posted from Forbes.com***

It was, to paraphrase Yogi Berra, déjà vu all over again.  Fielding calls last week from journalists about reports the NSA had been engaged in massive and secret data mining of phone records and Internet traffic, I couldn’t help but wonder why anyone was surprised by the so-called revelations.

Not only had the surveillance been going on for years, the activity had been reported all along—at least outside the mainstream media.  The programs involved have been the subject of longstanding concern and vocal criticism by advocacy groups on both the right and the left.

For those of us who had been following the story for a decade, this was no “bombshell.”  No “leak” was required.  There was no need for an “expose” of what had long since been exposed.

As the Cato Institute’s Julian Sanchez and others reminded us, the NSA’s surveillance activities, and many of the details breathlessly reported last week, weren’t even secret.  They come up regularly in Congress, during hearings, for example, about renewal of the USA Patriot Act and the Foreign Intelligence Surveillance Act, the principal laws that govern the activity.

In those hearings, civil libertarians (Republicans and Democrats) show up to complain about the scope of the law and its secret enforcement, and are shot down as being soft on terrorism.  The laws are renewed and even extended, and the story goes back to sleep.

But for whatever reason, the mainstream media, like the corrupt Captain Renault in “Casablanca,” collectively found itself last week “shocked, shocked” to discover widespread, warrantless electronic surveillance by the U.S. government.  Surveillance they’ve known about for years.

Let me be clear.  As one of the long-standing critics of these programs, and especially their lack of oversight and transparency, I have no objection to renewed interest in the story, even if the drama with which it is being reported smells more than a little sensational with a healthy whiff of opportunism.

In a week in which the media did little to distinguish itself, for example, The Washington Post stood out, and not in a good way.  As Ed Bott detailed in a withering post for ZDNet on Saturday, the Post substantially revised its most incendiary article, a Thursday piece that originally claimed nine major technology companies had provided direct access to their servers as part of the Prism program.

That “scoop” generated more froth than the original “revelation” that Verizon had been complying with government demands for customer call records.

Except that the Post’s sole source for its claims turned out to be a PowerPoint presentation of “dubious provenance.”  A day later, the editors had removed the most thrilling but unsubstantiated revelations about Prism from the article.  Yet in an unfortunate and baffling Orwellian twist, the paper made absolutely no mention of the “correction.”  As Bott points out, that violated not only common journalistic practice but the paper’s own revision and correction policy.

All this and much more, however, would have been in the service of a good cause–if, that is, it led to an actual debate about electronic surveillance we’ve needed for over a decade.

Unfortunately, it won’t.  The mainstream media will move on to the next story soon enough, whether some natural or man-made disaster.

And outside the Fourth Estate, few people will care or even notice when the scandal dies.  However they feel this week, most Americans simply aren’t informed or bothered enough about wholesale electronic surveillance to force any real accountability, let alone reform.  Those who are up in arms today might ask themselves where they were for the last decade or so, and whether their righteous indignation now is anything more than just that.

As Politico’s James Hohmann noted on Saturday, “Government snooping gets civil libertarians from both parties exercised, but this week’s revelations are likely to elicit a collective yawn from voters if past polling is any sign.”

Why so pessimistic?  I looked over what I’ve written on this topic in the past, and found the following essay, written in 2008, which appeared in slightly different form in my 2009 book, “The Laws of Disruption.”   It puts the NSA’s programs in historical context, and tries to present both the costs and benefits of how they’ve been implemented.  It points out why at least some aspects of these government activities are likely illegal, and what should be done to rein them in.

What I describe is just as scandalous as anything that came out last week, if not more so.

Yet I present it below with the sad realization that if I were writing it today–five years later–I wouldn’t need to change a single word.  Except maybe the last sentence.  And then, just maybe.

Searching Bits, Seizing Information

U.S. citizens are protected from unreasonable search and seizure of their property by their government.  In the Constitution, that right is enshrined in the Fourth Amendment, which was enacted in response to warrantless searches by British agents in the run-up to the Revolutionary War. Over the past century, the Supreme Court has increasingly seen the Fourth Amendment as a source of protection for personal space—the right to a “zone of privacy” that governments can invade only with probable cause that evidence of a crime will be revealed.

Under U.S. law, Americans have little in the way of protection of their privacy from businesses or from each other. The Fourth Amendment is an exception, albeit one that applies only to government.

But digital life has introduced new and thorny problems for Fourth Amendment law. Since the early part of the twentieth century, courts have struggled to extend the “zone of privacy” to intangible interests—a right to privacy, in other words, in one’s information. But to “search” and “seize” implies real world actions. People and places can be searched; property can be seized.

Information, on the other hand, need not take physical form, and can be reproduced infinitely without damaging the original. Since copies of data may exist, however temporarily, on thousands of random computers, in what sense do netizens have “property” rights to their information? Does intercepting data constitute a search or a seizure or neither?

The law of electronic surveillance avoids these abstract questions by focusing instead on a suspect’s expectations. Courts reviewing challenged investigations ask simply if the suspect believed the information acquired by the government was private data and whether his expectation of privacy was reasonable.

It is not the actual search and seizure that the Fourth Amendment forbids, after all, but unreasonable search and seizure. So the legal analysis asks what, under the circumstances, is reasonable. If you are holding a loud conversation in a public place, it isn’t reasonable for you to expect privacy, and the police can take advantage of whatever information they overhear. Most people assume, on the other hand, that data files stored on the hard drive of a home computer are private and cannot be copied without a warrant.

One problem with the “reasonable expectation” test is that as technology changes, so do user expectations. The faster the Law of Disruption accelerates, the more difficult it is for courts to keep pace. Once private telephones became common, for example, the Supreme Court required law enforcement agencies to follow special procedures for the search and seizure of conversations—that is, for wiretaps. Congress passed the first wiretap law, known as Title III, in 1968. As information technology has revolutionized communications and as user expectations have evolved, the courts and Congress have been forced to revise Title III repeatedly to keep it up to date.

In 1986, the Electronic Communications Privacy Act amended Title III to include new protection for electronic communications, including e-mail and communications over cellular and other wireless technologies. A model of reasonable lawmaking, the ECPA ensured these new forms of communication were generally protected while closing a loophole for criminals who were using them to evade the police. (By 2005, 92 percent of wiretaps targeted cell phones.)

As telephone service providers multiplied and networks moved from analog to digital, a 1994 revision required carriers to build in special access for investigators to get around new features such as call forwarding. Once a Title III warrant is issued, law enforcement agents can now simply log in to the suspect’s network provider and receive real-time streams of network traffic.

Since 1968, Title III has maintained an uneasy truce between the rights of citizens to keep their communications private and the ability of law enforcement to maintain technological parity with criminals. As the digital age progresses, this balance is harder to maintain. With each cycle of Moore’s Law, criminals discover new ways to use digital technology to improve the efficiency and secrecy of their operations, including encryption, anonymous e-mail remailers, and private telephone networks. During the 2008 terrorist attacks in Mumbai, for example, co-conspirators used television reports of police activity to keep the gunmen at various sites informed, using Internet telephones that were hard to trace.

As criminals adopt new technologies, law enforcement agencies predictably call for new surveillance powers. China alone employs more than 30,000 “Internet police” to monitor online traffic as part of the censorship system sometimes known as the “Great Firewall of China.” The government apparently intercepts all Chinese-bound text messages and scans them for restricted words, including democracy, earthquake, and milk powder.

The words are removed from the messages, and a copy of the original along with identifying information is stored on the government’s system. When Canadian human rights activists recently hacked into Chinese government networks they discovered a cluster of message-logging computers that had recorded more than a million censored messages.
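Mechanically, the filtering described above is simple. The following is a hypothetical sketch of keyword-based message censorship with logging; the word list, message format, and log structure are illustrative only and not drawn from any real system:

```python
# Hypothetical sketch of keyword filtering with logging.
# Word list and record format are illustrative, not from any actual system.
RESTRICTED = {"democracy", "earthquake", "milk powder"}

censor_log = []  # stands in for the message-logging servers

def filter_message(sender_id, text):
    """Strip restricted words and log the original if any were found."""
    hits = [w for w in RESTRICTED if w in text.lower()]
    if not hits:
        return text
    # A copy of the original, plus identifying information, is retained.
    censor_log.append({"sender": sender_id, "original": text, "matched": hits})
    cleaned = text
    for w in hits:
        # Naive case-insensitive removal; a real filter would be more robust.
        cleaned = cleaned.replace(w, "").replace(w.title(), "")
    return " ".join(cleaned.split())

msg = filter_message("user-123", "Reports of the earthquake are spreading")
```

The point of the sketch is the asymmetry: the sender receives a silently altered message, while the unaltered original is kept with identifying information attached.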

Netizens, increasingly fearful that the arms race between law enforcement and criminals will claim their privacy rights as unintended victims, are caught in the middle. Those fears became palpable after the September 11, 2001, terrorist attacks and those that followed in Indonesia, London, and Madrid. The world is now engaged in a war with no measurable objectives for winning, fought against an anonymous and technologically savvy enemy who recruits, trains, and plans assaults largely through international communication networks. Security and surveillance of all varieties are now global priorities, eroding privacy interests significantly.

The emphasis on security over privacy is likely to be felt for decades to come. Some of the loss has already been felt in the real world. To protect ourselves from future attacks, everyone can now expect more invasive surveillance of their activities, whether through massive networks of closed-circuit TV cameras in large cities or increased screening of people and luggage during air travel.

The erosion of privacy is even more severe online. Intelligence is seen as the most effective weapon in a war against terrorists. With or without authorization, law enforcement agencies around the world have been monitoring large quantities of the world’s Internet data traffic. Title III has been extended to private networks and Internet phone companies, who must now insert government access points into their networks. (The FCC has proposed adding other providers of phone service, including universities and large corporations.)

Because of difficulties in isolating electronic communications associated with a single IP address, investigators now demand the complete traffic of large segments of addresses, that is, of many users. Data mining technology is applied after the fact to search the intercepted information for the relevant evidence.
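The coarseness of that approach is easy to demonstrate. Here is a minimal sketch of intercepting by address block rather than by individual address; the capture records are invented, and the addresses come from the reserved documentation ranges:

```python
from ipaddress import ip_address, ip_network

# Hypothetical capture records; addresses are documentation ranges (RFC 5737).
captured = [
    {"src": "203.0.113.5",  "dst": "198.51.100.9", "bytes": 1200},
    {"src": "203.0.113.44", "dst": "192.0.2.10",   "bytes": 80},
    {"src": "198.51.100.1", "dst": "192.0.2.99",   "bytes": 5000},
]

def from_block(records, cidr):
    """Coarse intercept: keep everything originating in an address block,
    sweeping in every user of that segment, suspect or not."""
    block = ip_network(cidr)
    return [r for r in records if ip_address(r["src"]) in block]

# Demanding one suspect's segment captures every user on it.
segment = from_block(captured, "203.0.113.0/24")
```

The filter selects by network segment, so the traffic of every user sharing the block is swept in; the sorting out happens only afterward, by mining the intercepted data.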

Passed soon after 9/11, the USA Patriot Act went much further. The Patriot Act abandoned many of the hard-fought controls on electronic surveillance built into Title III. New “enhanced surveillance procedures” allow any judge to authorize electronic surveillance and lower the standard for warrants to seize voice mails.

The FBI was given the power to conduct wiretaps without warrants and to issue so-called national security letters to gag network operators from revealing their forced cooperation. Under a 2006 extension, FBI officials were given the power to issue NSLs that silenced the recipient forever, backed up with a penalty of up to five years in prison.

Gone is even a hint of the Supreme Court’s long-standing admonitions that search and seizure of information should be the investigatory tool of last resort.

Despite the relaxed rules, or perhaps inspired by them, the FBI acknowledged in 2007 that it had violated Title III and the Patriot Act repeatedly, illegally searching the telephone, Internet, and financial records of an unknown number of Americans. A Justice Department investigation found that from 2002 to 2005 the bureau had issued nearly 150,000 NSLs, a number the bureau had grossly under-reported to Congress.

Many of these letters violated even the relaxed requirements of the Patriot Act. The FBI habitually requested not only a suspect’s data but also those of people with whom he maintained regular contact—his “community of interest,” as the agency called it. “How could this happen?” FBI director Robert Mueller asked himself at the 2007 Senate hearings on the report. Mueller didn’t offer an answer.

Ultimately, a federal judge declared the FBI’s use of NSLs unconstitutional on free-speech grounds, a decision that is still on appeal. The National Security Agency, which gathers foreign intelligence, undertook an even more disturbing expansion of its electronic surveillance powers.

Since the Constitution applies only within the U.S., foreign intelligence agencies are not required to operate within the limits of Title III. Instead, their information-gathering practices are held to a much more relaxed standard specified in the Foreign Intelligence Surveillance Act. FISA allows warrantless wiretaps anytime that intercepted communications do not include a U.S. citizen and when the communications are not conducted through U.S. networks. (The latter restriction was removed in 2008.)

Even these minimal requirements proved too restrictive for the agency. Concerned that U.S. operatives were organizing terrorist attacks electronically with overseas collaborators, President Bush authorized the NSA to bypass FISA and conduct warrantless electronic surveillance at will as long as one of the parties to the information exchange was believed to be outside the United States.

Some of the president’s staunchest allies found the NSA’s plan, dubbed the Terrorist Surveillance Program, of dubious legality. Just before the program became public in 2005, senior officials in the Justice Department refused to reauthorize it.

In a bizarre real-world game of cloak-and-dagger, presidential aides, including future attorney general Alberto Gonzales, rushed to the hospital room of then-attorney general John Ashcroft, who was seriously ill, in hopes of getting him to overrule his staff. Justice Department officials got wind of the end run and managed to get to Ashcroft first. Ashcroft, who was barely able to speak from painkillers, sided with his staff.

Many top officials, including Ashcroft and FBI director Mueller, threatened to resign over the incident. President Bush agreed to stop bypassing the FISA procedure and seek a change in the law to allow the NSA more flexibility. Congress eventually granted his request.

The NSA’s machinations were both clumsy and dangerous. Still, I confess to having considerable sympathy for those trying to obtain actionable intelligence from online activity. Post-9/11 assessments revealed embarrassing holes in the technological capabilities of most intelligence agencies worldwide. (Admittedly, it also revealed repeated failures to act on intelligence that was already collected.) Initially at least, the public demanded tougher measures to avoid future attacks.

Keeping pace with international terror organizations and still following national laws, however, is increasingly difficult. For one thing, communications of all kinds are quickly migrating to the cheaper and more open architecture of the Internet. An unintended consequence of this change is that the nationalities of those involved in intercepted communications are increasingly difficult to determine.

E-mail addresses and instant-message IDs don’t tell you the citizenship or even the location of the sender or receiver. Even telephone numbers don’t necessarily reveal a physical location. Internet telephone services such as Skype give their customers U.S. phone numbers regardless of their actual location. Without knowing the nationality of a suspect, it is hard to know what rights she is entitled to.

The architecture of the Internet raises even more obstacles against effective surveillance. Traditional telephone calls take place over a dedicated circuit connecting the caller and the person being called, making wiretaps relatively easy to establish. Only the cooperation of the suspect’s local exchange is required.

The Internet, however, operates as a single global exchange. E-mails, voice, video, and data files—whatever is being sent is broken into small packets of data. Each packet follows its own path between connected computers, largely determined by data traffic patterns present at the time of the communication.

Data may travel around the world even if its destination is local, crossing dozens of national borders along the way. It is only on the receiving end that the packets are reassembled.
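The split-and-reassemble behavior described above can be sketched in a few lines. This is a toy model, not real IP: numbered packets arrive in whatever order the network delivers them, and the receiver restores the original message from the sequence numbers:

```python
import random

def packetize(message, size=8):
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Restore the original message regardless of arrival order."""
    return "".join(payload for _, payload in sorted(packets))

original = "Each packet follows its own path across the network."
packets = packetize(original)
random.shuffle(packets)  # simulate independent routes and arrival order
restored = reassemble(packets)
```

Because order is recovered only at the receiving end, no single intermediate point is guaranteed to see the whole message, which is exactly what frustrates interception at any one node.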

This design, the genius of the Internet, improves network efficiency. It also provides a significant advantage to anyone trying to hide his activities. On the other hand, NSLs and warrantless wiretapping on the scale apparently conducted by the NSA move us frighteningly close to the “general warrant” American colonists rejected in the Fourth Amendment. They were right to revolt over the unchecked power of an executive to do what it wants, whether in the name of orderly government, tax collection, or antiterrorism.

In trying to protect its citizens against future terror attacks, the secret operations of the U.S. government abandoned core principles of the Constitution. Even with the best intentions, governments that operate in secrecy and without judicial oversight quickly descend into totalitarianism. Only the intervention of corporate whistle-blowers, conscientious government officials, courts, and a free press brought the United States back from the brink of a different kind of terrorism.

Internet businesses may be entirely supportive of government efforts to improve the technology of policing. A society governed by laws is efficient, and efficiency is good for business. At the same time, no one is immune from the pressures of anxious customers who worry that the information they provide will be quietly delivered to whichever regulator asks for it. Secret surveillance raises the level of customer paranoia, leading rational businesses to avoid countries whose practices are not transparent.

Partly in response to the NSA program, companies and network operators are increasingly routing information flow around U.S. networks, fearing that even transient communications might be subject to large-scale collection and mining operations by law enforcement agencies. But aside from using private networks and storing data offshore, routing transmissions to avoid some locations is as hard to do as forcing them through a particular network or node.

The real guarantor of privacy in our digital lives may not be the rule of law. The Fourth Amendment and its counterparts work in the physical world, after all, because tangible property cannot be searched and seized in secret. Information, however, can be intercepted and copied without anyone knowing it. You may never know when or by whom your privacy has been invaded. That is what makes electronic surveillance more dangerous than traditional investigations, as the Supreme Court realized as early as 1967.

In the uneasy balance between the right to privacy and the needs of law enforcement, the scales are increasingly held by the Law of Disruption. More devices, more users, more computing power: the sheer volume of information and the rapid evolution of how it can be exchanged have created an ocean of data. Much of it can be captured, deciphered, and analyzed only with great (that is, expensive) effort. Moore’s Law lowers the costs to communicate, raising the costs for governments interested in the content of those communications.

The kind of electronic surveillance performed by the Chinese government is outrageous in its scope, but only the clumsiness of its technical implementation exposed it. Even if governments want to know everything that happens in our digital lives, and even if the law allows them or is currently powerless to stop them, there isn’t enough technology at their disposal to do it, or at least to do it secretly.

So far.

]]>
https://techliberation.com/2013/06/10/the-medias-sound-and-fury-over-nsa-surveillance/feed/ 1 44926
CISPA’s Vast Overreach https://techliberation.com/2013/04/17/cispas-vast-overreach/ https://techliberation.com/2013/04/17/cispas-vast-overreach/#comments Wed, 17 Apr 2013 14:30:06 +0000 http://techliberation.com/?p=44532

Last summer at an AEI-sponsored event on cybersecurity, NSA head General Keith Alexander made the case for information sharing legislation aimed at improving cybersecurity. His response to a question from Ellen Nakashima of the Washington Post (starting at 54:25 in the video at the link) was a pretty good articulation of how malware is identified and blocked using algorithmic signatures. In his longish answer, he made the pitch for access to key malware information for the purpose of producing real-time defenses.

What the antivirus world does is it maps that out and creates what’s called a signature. So let’s call that signature A. …. If signature A were to hit or try to get into the power grid, we need to know that signature A was trying to get into the power grid and came from IP address x, going to IP address y.

We don’t need to know what was in that email. We just need to know that it contained signature A, came from there, went to there, at this time.

[I]f we know it at network speed we can respond to it. And those are the authorities and rules and stuff that we’re working our way through.

[T]hat information sharing portion of the legislation is what the Internet service providers and those companies would be authorized to share back and forth with us at network speed. And it only says: signature A, IP address, IP address. So, that is far different than that email that was on it coming.

Now it’s interesting to note, I think—you know, I’m not a lawyer but you could see this—it’s interesting to note that a bad guy sent that attack in there. Now the issue is what about all the good people that are sending their information in there, are you reading all those. And the answer is we don’t need to see any of those. Only the ones that had the malware on it. Everything else — and only the fact that that malware was there — so you didn’t have to see any of the original emails. And only the ones that had the malware on it did you need to know that something was going on.

It might be interesting to get information about who sent malware, but General Alexander said he wanted to know attack signatures, originating IP address, and destination. That’s it.
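A minimal sketch of the kind of record Alexander describes — signature ID, source and destination address, timestamp, and nothing from the message body. The field names and the hash-based “signature” are my own simplifications, not any actual sharing format; real antivirus signatures are pattern rules, but a digest illustrates the matching idea:

```python
import hashlib
import time

def signature_of(payload: bytes) -> str:
    """Toy 'signature': a digest of the malicious payload (illustrative only)."""
    return hashlib.sha256(payload).hexdigest()[:16]

# Hypothetical signature database mapping digests to signature names.
KNOWN_BAD = {signature_of(b"EVIL_PAYLOAD"): "signature-A"}

def sharing_record(packet):
    """Return a shareable record only if the packet matches a known signature.
    Note that the record carries no message content."""
    sig = signature_of(packet["payload"])
    if sig not in KNOWN_BAD:
        return None  # clean traffic is never reported
    return {"signature": KNOWN_BAD[sig],
            "src_ip": packet["src_ip"],
            "dst_ip": packet["dst_ip"],
            "time": packet["time"]}

pkt = {"payload": b"EVIL_PAYLOAD", "src_ip": "203.0.113.7",
       "dst_ip": "198.51.100.2", "time": time.time()}
record = sharing_record(pkt)
```

The sketch makes the narrowness of the claim concrete: clean traffic produces nothing, and a match produces only metadata. The question the post goes on to ask is why the legislation authorizes so much more than this.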

Now take a look at what CISPA, the Cyber Intelligence Sharing and Protection Act (H.R. 624), allows companies to share with the government provided they can’t be proven to have acted in bad faith:

information directly pertaining to—

(i) a vulnerability of a system or network of a government or private entity or utility;

(ii) a threat to the integrity, confidentiality, or availability of a system or network of a government or private entity or utility or any information stored on, processed on, or transiting such a system or network;

(iii) efforts to deny access to or degrade, disrupt, or destroy a system or network of a government or private entity or utility; or

(iv) efforts to gain unauthorized access to a system or network of a government or private entity or utility, including to gain such unauthorized access for the purpose of exfiltrating information stored on, processed on, or transiting a system or network of a government or private entity or utility.

That’s an incredible variety of subjects. It can include vast swaths of data about Internet users, their communications, and the files they upload. In no sense is it limited to attack signatures and relevant IP addresses.

What is going on here? Why has General Alexander’s claim to need attack signatures and IP addresses resulted in legislation that authorizes wholesale information sharing and that immunizes companies who violate privacy in the process? One could only speculate. What we know is that CISPA is a vast overreach relative to the problem General Alexander articulated. The House is debating CISPA Wednesday and Thursday this week.

]]>
https://techliberation.com/2013/04/17/cispas-vast-overreach/feed/ 5 44532
Marc Hochstein on bitcoin https://techliberation.com/2013/04/16/marc-hochstein/ https://techliberation.com/2013/04/16/marc-hochstein/#respond Tue, 16 Apr 2013 10:00:45 +0000 http://techliberation.com/?p=44516 ]]>

Marc Hochstein, Executive Editor of American Banker,  a leading media outlet covering the banking and financial services community, discusses bitcoin.

According to Hochstein, bitcoin has made its name as a digital currency, but the truly revolutionary aspect of the technology is its dual function as a payment system competing against companies like PayPal and Western Union. While bitcoin has been in the news for its soaring exchange rate lately, Hochstein says the actual price of bitcoin is really only relevant for speculators in the short-term; in the long-term, however, the anonymous, decentralized nature of bitcoin has far-reaching implications.

Hochstein goes on to talk about the new market in bitcoin futures and some of bitcoin’s weaknesses—including the volatility of the bitcoin market.

Download

Related Links

]]>
https://techliberation.com/2013/04/16/marc-hochstein/feed/ 0 44516
Andy Greenberg on WikiLeaks and cypherpunks https://techliberation.com/2013/04/09/andy-greenberg/ https://techliberation.com/2013/04/09/andy-greenberg/#respond Tue, 09 Apr 2013 13:09:45 +0000 http://techliberation.com/?p=44471 ]]>

Andy Greenberg, technology writer for Forbes and author of the new book “This Machine Kills Secrets: How WikiLeakers, Cypherpunks, and Hacktivists Aim to Free the World’s Information,” discusses the rise of the cypherpunk movement, how it led to WikiLeaks, and what the future looks like for cryptography.

Greenberg describes cypherpunks as radical techie libertarians who dreamt about using encryption to shift the balance of power from the government to individuals. He shares the rich history of the movement, contrasting one of the movement’s founders—hardcore libertarian Tim May—with the movement’s hero—Phil Zimmermann, the applied cryptographer who developed PGP, the first tool that allowed regular people to encrypt. Zimmermann was not a libertarian and was wary of the cypherpunks, despite advocating crypto as a tool for combating the power of government.

According to Greenberg, the cypherpunk movement did not fade away, but rather grew into a larger hacker movement; he cites the Tor network, bitcoin, and WikiLeaks as examples of its continuing influence. Julian Assange, founder of WikiLeaks, belonged to a listserv followed by early cypherpunks, though he was not very active on it at the time, Greenberg says.

Greenberg is excited for the future of information leaks, suggesting that the more decentralized the leaking process becomes, the faster cryptography will evolve.

Download

Related Links

]]>
https://techliberation.com/2013/04/09/andy-greenberg/feed/ 0 44471
Regulating the Market for Zero-day Exploits: Look to the demand side https://techliberation.com/2013/03/15/regulating-the-market-for-zero-day-exploits-look-to-the-demand-side/ https://techliberation.com/2013/03/15/regulating-the-market-for-zero-day-exploits-look-to-the-demand-side/#respond Fri, 15 Mar 2013 20:04:34 +0000 http://techliberation.com/?p=44083

A market has developed in which specialized firms discover new vulnerabilities in software and sell that knowledge for tens or hundreds of thousands of dollars. These vulnerabilities are known as “zero-day exploits” because the software vendor has had zero days to fix them before they are first used. In this blog post, we recognize that this market may require some kind of action, but reject simplistic calls for “regulation” of suppliers. We recommend focusing on the demand side of the market.

Although there is surprisingly little hard evidence of its scope and scale, the market for vulnerabilities is considered troublesome or dangerous by many. While the bounties paid may stimulate additional research into security, it is the exclusive and secret possession of this knowledge by a single buyer that raises concerns. It is clear that when someone other than the software vendor pays $100,000 for a zero-day, they are probably not paying for defense, but rather for an opportunity to take advantage of someone else’s vulnerability. Thus, the vulnerabilities remain unpatched. (Secrecy also makes the market rather inefficient; it may be possible to sell the same “secret” to several buyers.)

The supply side of the market consists of small firms and individuals with specialized knowledge. They compete to be the first to identify new vulnerabilities in software or information systems and then bring them to buyers. Many buyers are reputed to be government intelligence, law enforcement or military agencies using tax dollars to finance purchases. But we know less about the demand side than we should. The point, however, is that buyers are empowered to initiate an attack, a power that even legitimate organizations could easily abuse.

Insofar as the market for exploits shifts incentives away from publicizing and fixing vulnerabilities toward competitive efforts to gain private, exclusive knowledge of them so they can be held in reserve for possible use, the market has important implications for global security. It puts a premium on dangerous vulnerabilities, and thus may put the social and economic benefits of the Internet at risk. While the US might think it has an advantage in this competition, as a leader in the Internet economy and one of the most cyber-dependent countries, it also has the most to lose.

Unfortunately, so far the only policy response proposed has been vague calls for “regulation.” Chris Soghoian in particular has made “regulation” the basis of his response, calling suppliers “modern-day merchants of death” and claiming that “Security researchers should not be selling zero-days to middle man firms…These firms are cowboys and if we do nothing to stop them, they will drag the entire security industry into a world of pain.”

Such responses, however, are too long on moral outrage and too short on hard-headed analysis and practical proposals. The idea that “regulation” can solve the problem overlooks major constraints:

  • The market is transnational and thus regulation of supply would require agreement among contending nation-states. National security interests are implicated, making agreement among states difficult.
  • Disclosure and enforcement would be challenging. Unlike physical weapons systems, exploits are invisible and traded digitally. Buyers and sellers have strong incentives not to disclose deals. James Lewis of CSIS, who worked on a project to restrict access to or exports of software, claims it “was impossible to control – there were so many ways to beat any restrictions, so many people who could write the code.”
  • The line between legitimate security services/research and the market for zero-day exploits is thin and blurry. Regulating exploit supply may translate into regulating all security software development, which would be costly and economically stifling.
  • It would be relatively easy for this type of market to go underground if regulation chafed. Governments could bring such R&D in-house instead of using an external market. Sales to terrorist or criminal groups are unlikely to be affected by any national or international system of regulation.

Despite these constraints, we do need to seriously consider ways to redirect incentives away from the discovery and possible exploitation of vulnerabilities towards discovering, publicizing and fixing them for the public benefit.

We suggest focusing policy responses on the demand side rather than the supply side. The zero-day market is largely a product of buyers, with sellers responding to that demand. And if it is true that much of the demand comes from the US Government itself, we should have a civilian agency such as DHS compile information about the scope and scale of our participation in the exploits market. We should also ask friendly nations to assess and quantify their own efforts as buyers, and share information about the scope of their purchases with us. If U.S. agencies and allies are key drivers of this market, we may have the leverage we need to bring the situation under control.

One idea that should be explored is a new federal program to purchase zero-day exploits at remunerative prices and then publicly disclose the vulnerabilities (using ‘responsible disclosure’ procedures that permit directly affected parties to patch them first). The program could systematically assess the nature and danger of the vulnerability and pay commensurate prices. It would need to be coupled with strong laws barring all government agencies – including military and intelligence agencies – from failing to disclose exploits with the potential to undermine the security of public infrastructure. If other, friendly governments joined the program, the costs could be shared along with the information.

In other words, instead of engaging in a futile effort to suppress the market, the US would attempt to create a near-monopsony that would pre-empt it and steer it toward beneficial ends. Funds for this purchase-to-disclose program could replace current funding for exploit purchases.

Obviously, terrorists, criminals or hostile states bent on destruction or break-ins would not be turned away from developing zero-days by the prospect of getting well-paid for their exploits. But most of the known supply side of the market does not seem to be composed of terrorists or criminals, but rather profit-motivated security specialists. And it’s likely that legitimate, well-paid talent will discover more flaws than “the dark side” in the long run.

Obviously the details regarding the design, procedures and oversight of this program would need to be developed. But on its face, a demand-side approach seems much more promising than railing against the morality of so-called cyber arms dealers.

]]>
https://techliberation.com/2013/03/15/regulating-the-market-for-zero-day-exploits-look-to-the-demand-side/feed/ 0 44083
Rise of the cyber-industrial complex https://techliberation.com/2013/03/08/rise-of-the-cyber-industrial-complex/ https://techliberation.com/2013/03/08/rise-of-the-cyber-industrial-complex/#comments Fri, 08 Mar 2013 19:54:18 +0000 http://techliberation.com/?p=44007

In our 2011 law review article, Tate Watkins and I warned: “[A] cyber-industrial complex is emerging, much like the military-industrial complex of the Cold War. This complex may serve not only to supply cybersecurity solutions to the federal government, but to drum up demand for those solutions as well.”

In The Hill today, Kevin Bogardus writes under the headline “K St. ready for cybersecurity cash grab”:

The cybersecurity push has drummed up work for influence shops downtown. There have been more than a dozen lobbying registrations for clients that mention “cybersecurity” since Election Day, according to lobbying disclosure records.

Robert Efrus, a long-time Washington hand, is one of many lobbyists working the issue.

“It is a growing niche on K Street,” Efrus said. “I think there are a lot of new players that are seeing action with the executive order and legislation being worked on in Congress, not forgetting the funding opportunities. A lot of tech lobbyists have upped their involvement in cyber for sure.” …

“From a lobbying perspective, with everything else going south, this is one of the few positive developments in the whole federal policy arena,” said Efrus[.] …

Lobbyists note that cybersecurity is one of the few areas where budget-conscious lawmakers are looking to spend.

Cybersecurity is officially government’s growth sector.

]]>
https://techliberation.com/2013/03/08/rise-of-the-cyber-industrial-complex/feed/ 2 44007
How the NSA is helping companies fight Chinese hackers without any information sharing law https://techliberation.com/2013/03/08/how-the-nsa-is-helping-companies-fight-chinese-hackers-without-any-information-sharing-law/ https://techliberation.com/2013/03/08/how-the-nsa-is-helping-companies-fight-chinese-hackers-without-any-information-sharing-law/#comments Fri, 08 Mar 2013 15:56:43 +0000 http://techliberation.com/?p=43995

Marc Ambinder has some phenomenal reporting in Foreign Policy today about how the NSA assists companies that are the victims of (usually Chinese) cyberespionage. It is a must read.

One thing we learn: “Cyber-warfare directed against American companies is reducing the gross domestic product by as much as $100 billion per year, according to a recent National Intelligence Estimate.” That is just slightly more than half a percent of GDP, which puts the scope of the threat in perspective.

The most interesting thing, though, is this:

In the coming weeks, the NSA, working with a Department of Homeland Security joint task force and the FBI, will release to select American telecommunication companies a wealth of information about China’s cyber-espionage program, according to a U.S. intelligence official and two government consultants who work on cyber projects. Included: sophisticated tools that China uses, countermeasures developed by the NSA, and unique signature-detection software that previously had been used only to protect government networks.

Press reports have indicated that the Obama administration plans to give certain companies a list of domain names China is known to use for network exploitation. But the coming effort is of an entirely different scope. These are American state secrets.

Very little that China does escapes the notice of the NSA, and virtually every technique it uses has been tracked and reverse-engineered. For years, and in secret, the NSA has also used the cover of some American companies – with their permission – to poke and prod at the hackers, leading them to respond in ways that reveal patterns and allow the United States to figure out, or “attribute,” the precise origin of attacks. The NSA has even designed creative ways to allow subsequent attacks but prevent them from doing any damage. Watching these provoked exploits in real time lets the agency learn how China works.

Will you look at that? Information sharing between the government and the private sector without liability protection. Even more than information sharing, it seems some businesses are allowing the NSA to monitor their systems.

As I’ve said before, there is nothing preventing the government from sharing information about cyberattacks with the private sector. Legislation isn’t required to allow that. As for businesses sharing information with government, they too are free to do so. The only question is whether they should get a free pass for violating contracts or breaking the law when they share in the name of security. I think that would be a mistake.

As Ambinder points out, “the NSA’s reputation has been tarnished by its participation in warrantless surveillance[.]” People don’t trust the NSA with good reason. Security is important, but so are civil liberties. Removing the possibility of liability would also remove any incentive companies might have to be a check on what information the NSA collects. Ambinder writes that given their experience with the warrantless wiretapping program, today “telecoms are wary of cooperating with the NSA beyond the scope of the law.” That’s as it should be. Do we really want to give companies cover to cooperate with the NSA beyond the scope of the law?

According to Ambinder, the NIE suggests “that the NSA will have to perform deep packet inspection on private networks at some point.” (This is the so-called EINSTEIN 3 system.) This doesn’t sound like a good idea, but if it is to happen, it should be debated in public. Liability protection might let businesses allow the NSA to employ the system in secret.

]]>
https://techliberation.com/2013/03/08/how-the-nsa-is-helping-companies-fight-chinese-hackers-without-any-information-sharing-law/feed/ 2 43995
Do we need a special government program to help cybersecurity supply meet demand? https://techliberation.com/2013/02/26/do-we-need-a-special-government-program-to-help-cybersecurity-supply-meet-demand/ https://techliberation.com/2013/02/26/do-we-need-a-special-government-program-to-help-cybersecurity-supply-meet-demand/#comments Tue, 26 Feb 2013 14:28:30 +0000 http://techliberation.com/?p=43812

Today, the House Science Committee is holding a hearing on “Cyber R&D Challenges and Solutions.” Under consideration is a bill reintroduced by Rep. Mike McCaul that takes numerous steps intended to increase the network security workforce. The bill passed overwhelmingly last year.

I have no doubt that, as we move more of our lives online, we need to draw more people into computer security. But just as we need more network security professionals, we need more programmers, geneticists, biomedical engineers, statisticians, and countless other professions. We will also continue to need some number of doctors, lawyers, mechanics, plumbers, and grocery clerks. Does it make sense to introduce legislation to fine-tune the number of practitioners of every trade?

Of course not. Which raises the question: what is so special about computer security? And the answer, I think, is “nothing is so special about computer security.” More people will get trained in computer security if the returns to doing so are higher, and fewer people will get trained in computer security if the returns to doing so are lower. Entry into the computer security business is simply a function of supply and demand.

The Washington Post reports, “The median salary for a graduate earning a degree in security was $55,000 in 2009, compared with $75,000 for computer engineering.” Is it any surprise, then, that more smart, tech-savvy students have pursued the latter route in recent years?

Intervening in a market that shows no signs of failing can have lots of unintended consequences. Most obviously, subsidies would run the serious risk of drawing too many workers into the computer security workforce. Those workers might find that they spent years investing in specialized skills without as much of a payoff as they expected. Tinkering could also distort the composition of people drawn into the field, for example by lowering the equilibrium salary and weakening the incentive for naturally talented people, who need no subsidized training, to work in security.

The bottom line is that a shortage of a particular kind of worker is a problem that solves itself. As salaries for security workers get bid up, more people will get training in security. The supply and demand dynamic is completely sufficient to get people into the correct professions in sufficient numbers.

The McCaul bill works through various subsidies and governmental reports to try to accomplish the same thing that the market would do if left to operate on its own. If the government wants to hire more computer security professionals, let them pay the money needed to draw people into this field. But let’s not jump through needless hoops to accomplish what should really be a straightforward task.

]]>
https://techliberation.com/2013/02/26/do-we-need-a-special-government-program-to-help-cybersecurity-supply-meet-demand/feed/ 1 43812
The cyberelephant in the room https://techliberation.com/2013/02/23/the-cyberelephant-in-the-room/ https://techliberation.com/2013/02/23/the-cyberelephant-in-the-room/#comments Sat, 23 Feb 2013 21:16:35 +0000 http://techliberation.com/?p=43799

Good question in The Economist from December of last year, before all the Mandiant madness:

As Mr Libicki asks, “what can we do back to a China that is stealing our data?” Espionage is carried out by both sides and is traditionally not regarded as an act of war. But the massive theft of data and the speed with which it can be exploited is something new. Responding with violence would be disproportionate, which leaves diplomacy and sanctions. But America and China have many other big items on their agenda, while trade is a very blunt instrument. It may be possible to identify products that China exports which compete only because of stolen data, but it would be hard and could risk a trade war that would damage both sides.

Given the state of China-U.S. relations today, it’s not clear there are any good options. The situation reminds me of America’s early history with piracy. Until China is better integrated into the global order, the executive is going to have quite a challenge on his hands.

]]>
https://techliberation.com/2013/02/23/the-cyberelephant-in-the-room/feed/ 2 43799
With Obama cyber executive order, we don’t need new legislation https://techliberation.com/2013/02/22/with-obama-cyber-executive-order-we-dont-need-new-legislation/ https://techliberation.com/2013/02/22/with-obama-cyber-executive-order-we-dont-need-new-legislation/#comments Fri, 22 Feb 2013 15:27:39 +0000 http://techliberation.com/?p=43794

Politicians from both parties are now saying that although President Obama took comprehensive action on cybersecurity through executive order, we still need legislation. Over at TIME.com I write that no, we don’t.

Republicans want to protect businesses from suit for breach of contract or privacy statute violations in the name of information sharing, but there’s no good reason for such blanket immunity. Democrats would like to see mandated security standards, but top-down regulation is a bad idea, especially in such a fast-moving area. But as I write:

Yet guided by their worst impulses – to extend protections to business, or to exert bureaucratic control – members of Congress will insist that it is imperative they get in on the action.

If they do, they will undoubtedly be saddling us with a host of unintended consequences that we will come to regret later.

The executive order does most of what Congress failed to do in its last session. What Congress could add now is unnecessary and likely pernicious. The executive order should be given time to work. Only then will Congress know if and how it might need to be “strengthened.”

]]>
https://techliberation.com/2013/02/22/with-obama-cyber-executive-order-we-dont-need-new-legislation/feed/ 2 43794
Gabriella Coleman on the ethics of free software https://techliberation.com/2013/01/08/gabriella-coleman-2/ https://techliberation.com/2013/01/08/gabriella-coleman-2/#respond Tue, 08 Jan 2013 14:15:33 +0000 http://techliberation.com/?p=43410

Gabriella Coleman, the Wolfe Chair in Scientific and Technological Literacy in the Art History and Communication Studies Department at McGill University, discusses her new book, “Coding Freedom: The Ethics and Aesthetics of Hacking,” which has been released under a Creative Commons license.

Coleman, whose background is in anthropology, shares the results of her cultural survey of free and open source software (F/OSS) developers, the majority of whom, she found, shared similar backgrounds and world views. Among these similarities were an early introduction to technology and a passion for civil liberties, specifically free speech.

Coleman explains the ethics behind hackers’ devotion to F/OSS, the social codes that guide its production, and the political struggles through which hackers question the scope and direction of copyright and patent law. She also discusses the tension between the overtly political free software movement and the “politically agnostic” open source movement, as well as what the future of the hacker movement may look like.


]]>
https://techliberation.com/2013/01/08/gabriella-coleman-2/feed/ 0 43410