Cybersecurity

Over at TIME.com, [I write about](http://techland.time.com/2011/11/14/the-consequences-of-apples-walled-garden/) last week’s flap over Apple kicking famed security researcher Charlie Miller out of its iOS developer program:

>So let’s be clear: Apple did not ban Miller for exposing a security flaw, as many have suggested. He was kicked out for violating his agreement with Apple to respect the rules around the App Store walled garden. And that gets to the heart of what’s really at stake here–the fact that so many dislike the strict control Apple exercises over its platform. …

>What we have to remember is that as strict as Apple may be, its approach is not just “not bad” for consumers, it’s creating more choice.

Read [the whole thing here](http://techland.time.com/2011/11/14/the-consequences-of-apples-walled-garden/).

In my ongoing work on technopanics, I’ve frequently noted how special interests create phantom fears and use “threat inflation” in an attempt to win attention and public contracts. In my next book, I have an entire chapter devoted to explaining how “fear sells” and I note how often companies and organizations incite fear to advance their own ends. Cybersecurity and child safety debates are littered with examples.

In their recent paper, “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy,” my Mercatus Center colleagues Jerry Brito and Tate Watkins argued that “a cyber-industrial complex is emerging, much like the military-industrial complex of the Cold War.” As Stefan Savage, a professor in the Department of Computer Science and Engineering at the University of California, San Diego, told The Economist magazine, the cybersecurity industry sometimes plays “fast and loose” with the numbers because it has an interest in “telling people that the sky is falling.” In a similar vein, many child safety advocacy organizations use technopanics to pressure policymakers to fund initiatives they create. [Sometimes I can get a bit snarky about this.]

Mark Thompson has a new essay up over at Time on “Cyber War Worrywarts” in which he argues that in debates about cybersecurity, “the ratio of scaremongers to calm logic [is] currently about a 2-to-1 edge in favor of the Jules Verne crowd.” He’s right. In fact, I used my latest Forbes essay to document some of the panicky rhetoric and examples of “threat inflation” we currently see at work in debates over cybersecurity policy. “Threat inflation” refers to the artificial escalation of dangers or harms to society or the economy, and doom-and-gloom rhetoric is certainly on the rise in this arena.

I begin my essay by noting how “It has become virtually impossible to read an article about cybersecurity policy, or sit through any congressional hearing on the issue, without hearing prophecies of doom about an impending ‘Digital Pearl Harbor,’ a ‘cyber Katrina,’ or even a ‘cyber 9/11.’” Meanwhile, Gen. Michael Hayden, who led the National Security Agency and Central Intelligence Agency under President George W. Bush, recently argued that a “digital Blackwater” may be needed to combat the threat of cyberterrorism.

These rhetorical claims are troubling to me for several reasons. I build on the concerns raised originally in an important Mercatus Center paper by my colleagues Jerry Brito and Tate Watkins, which warns of the dangers of threat inflation in policy debates and the corresponding rise of the “cybersecurity industrial complex.” I expand on those concerns in my Forbes essay.

In today’s Washington Post, Senators Lieberman, Collins and Carper [had an op-ed](http://www.washingtonpost.com/opinions/a-gold-standard-in-cyber-defense/2011/07/01/gIQAjsZk2H_story.html) calling for comprehensive cybersecurity legislation. If we don’t pass such legislation soon, they say, “The alternative could be a digital Pearl Harbor — and another day of infamy.”

[Last time I checked](http://techliberation.com/2011/05/02/langevin-panetta-is-cyberdoom-certified/), Pearl Harbor left over two thousand persons dead and pushed the United States into a world war. There is no evidence that a cyber-attack of comparable effect is possible. Yet as [I write in TIME.com’s Techland](http://techland.time.com/2011/07/08/is-cyberwar-real-or-just-hype/), war rhetoric allows government to pursue avenues that might otherwise be closed:

>The problem with the war metaphor is that treating a cyber attack as an act of war, rather than a crime, invites a different governmental response. In the offline world, vandalism, theft, and even international espionage are treated as crimes. When you detect them, you call law enforcement, who investigate and prosecute and, most importantly, do so while respecting your civil liberties. In war, these niceties can go out the window.

>War changes the options available to government, says noted security expert Bruce Schneier. Things you would never agree to in peacetime you agree to in wartime. Referring to the warrantless wiretapping of Americans that AT&T allowed the NSA to conduct after 9/11, Schneier has said, “In peacetime if the government goes to AT&T and says, ‘Hey, we want to eavesdrop on everybody,’ AT&T says, ‘Stop, where’s your warrant?’ In wartime, AT&T says, ‘Use that closet over there, lock the door, and just put a do not disturb sign on it.’”

Check out the [whole article](http://techland.time.com/2011/07/08/is-cyberwar-real-or-just-hype/) for more outrage.

Earlier this week, Adrian Chen wrote [a great exclusive](http://gawker.com/5805928/the-underground-website-where-you-can-buy-any-drug-imaginable) for Gawker about Silk Road, the online market for illicit drugs. I strongly commend the piece to you. The site is only accessible via the [anonymizing router network Tor](http://en.wikipedia.org/wiki/Tor_(anonymity_network)), although it is [viewable using tor2web](https://ianxz6zefk72ulzz.tor2web.org/). Transactions are made using bitcoins, the digital currency I’ve [previously](http://techland.time.com/2011/04/16/online-cash-bitcoin-could-challenge-governments/) [written](http://techliberation.com/2011/04/16/bitcoin-imagine-a-net-without-intermediaries/) about, and which I explain in a [new video for Reason.tv](http://www.youtube.com/watch?v=yYTqvYqXRbY&feature=youtu.be&t=16s) (below), also out this week.

After his piece was published, Chen added the following addendum:

>**Update:** Jeff Garzik, a member of the Bitcoin core development team, says in an email that bitcoin is not as anonymous as the denizens of Silk Road would like to believe. He explains that because all Bitcoin transactions are [recorded](http://en.wikipedia.org/wiki/Bitcoin#Transactions) in a public log, though the identities of all the parties are anonymous, law enforcement could use sophisticated network analysis techniques to parse the transaction flow and track down individual Bitcoin users.

>”Attempting major illicit transactions with bitcoin, given existing statistical analysis techniques deployed in the field by law enforcement, is pretty damned dumb,” he says.

I’ve been [asked](https://twitter.com/#!/elidourado/status/76088980852064257) by several folks about this: just how anonymous is bitcoin? My answer is that we don’t exactly know yet. Yes, all transactions are recorded in the public ledger that is the bitcoin network, but all that means is that you can see how many bitcoins were transferred from one account on the network to another account. This tells you nothing about the identity of the persons behind the accounts. Theoretically, you could identify just one person on the network and ask them (or coerce them) to identify the persons from whom they received payments, then go to those persons in turn and ask them who they accepted payment from, etc., until you’ve identified everyone, or just a person of interest. But you can imagine all the reasons this is impractical. More likely, a bitcoin user will be unmasked through [identifying information inadvertently revealed](http://forum.bitcoin.org/index.php?topic=8.msg10621#msg10621) in the course of a transaction.
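The chain-of-identification idea above can be sketched as a walk over the transaction graph. This is a toy model with made-up addresses, not real chain analysis (which works on actual blockchain data and uses address-clustering heuristics), but it shows why one identified participant can expose everyone connected to them through payment flows:

```python
from collections import defaultdict, deque

# Toy model of the public ledger: every transfer is visible,
# but only pseudonymous addresses appear -- no real identities.
transfers = [
    ("addr_A", "addr_B", 5.0),
    ("addr_B", "addr_C", 2.5),
    ("addr_C", "addr_D", 1.0),
    ("addr_E", "addr_B", 3.0),
]

def counterparties(ledger, start):
    """Walk the transaction graph outward from one known address,
    collecting every address reachable through payment flows."""
    graph = defaultdict(set)
    for src, dst, _amount in ledger:
        graph[src].add(dst)
        graph[dst].add(src)  # follow both payers and payees
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen - {start}

# Identify one participant (say, addr_B) and the public ledger
# reveals everyone they transacted with, directly or indirectly.
print(sorted(counterparties(transfers, "addr_B")))
# → ['addr_A', 'addr_C', 'addr_D', 'addr_E']
```

The graph walk is trivial; the hard part, as the post notes, is attaching a real-world identity to even one address in the first place.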

That all said, it seems that this week has also brought us a “natural experiment” that might settle the issue. LulzSec, the hacker group responsible for the [recent PBS hack](http://www.nytimes.com/2011/05/31/technology/31pbs.html), this week [announced](http://www.informationweek.com/news/security/attacks/229900111) that it has compromised the personal information of over a million Sony user accounts and has released a batch of 150,000. Here’s the thing: LulzSec is [accepting donations via Bitcoin](https://twitter.com/#!/LulzSec/status/76388576832651265) and [says it has received](https://twitter.com/#!/LulzSec/status/76667674947633152) over $100 so far. The group’s bitcoin receiving address is 176LRX4WRWD5LWDMbhr94ptb2MW9varCZP. Also, while in control of PBS.org, the group [offered vanity subdomains](https://twitter.com/#!/LulzSec/status/75159378801598464) (e.g. techliberation.pbs.org) for 2 BTC each.

So, here’s a high-profile group the FBI and Secret Service are no doubt itching to get their hands on. A bitcoin receiving address for them is public. I guess we’ll find out how anonymous it is.

Jim [posted earlier today](http://techliberation.com/2011/05/31/be-sure-to-attend-cfp/) about the [Computers, Freedom and Privacy](http://www.cfp.org/2011/wiki/index.php/Main_Page) conference June 14th to 16th, which I’m very much looking forward to attending. If you’re in town for that, though, I’d like to bring to your attention two other related conferences being put on by the Center for Infrastructure Protection and Homeland Security at George Mason University.

The first is the **Tenth Workshop on the Economics of Information Security**, the leading forum for interdisciplinary scholarship on information security, combining expertise from the fields of economics, social science, business, law, policy and computer science. Prior workshops have explored the role of incentives between attackers and defenders, identified market failures dogging Internet security, and assessed investments in cyber-defense. [It starts on June 13th and the program is here.](http://www.regonline.com/builder/site/tab1.aspx?EventID=960652)

More relevant to my interests is the **Workshop on Cybersecurity Incentives** to be held June 16th, and featuring a keynote by Bruce Schneier. [The program is here.](http://www.regonline.com/builder/site/tab2.aspx?EventID=959995) The workshop will look at how scholarship in law, economics and other behavioral sciences informs stakeholders about how markets, incentives and legal rules affect each other, and sheds light on determinations of liability and responsibility.

POLITICO reports that a bill aimed at combating so-called “rogue websites” will soon be introduced in the U.S. Senate by Sen. Patrick Leahy. The legislation, entitled the PROTECT IP Act, will substantially resemble COICA (PDF), a bill that was reported unanimously out of the Senate Judiciary Committee late last year but did not reach a floor vote. As more details about the new bill emerge, we’ll likely have much more to say about it here on TLF.

I discussed my concerns about and suggested changes to the COICA legislation here last November; the PROTECT IP Act reportedly contains several new provisions aimed at mitigating concerns about the statute’s breadth and procedural protections. However, as Mike Masnick points out on Techdirt, the new bill — unlike COICA — contains a private right of action, although that right may not permit rights holders to disable infringing domain names. Also unlike COICA, the PROTECT IP Act would apparently require search engines to cease linking to domain names that a court has deemed to be “dedicated to infringing activities.”

For a more in-depth look at this contentious and complex issue, check out the panel discussion that the Competitive Enterprise Institute and TechFreedom hosted last month. Our April 7 event explored the need for, and concerns about, legislative proposals to combat websites that facilitate and engage in unlawful counterfeiting and copyright infringement. The event was moderated by Juliana Gruenwald of National Journal. The panelists included me, Danny McPherson of VeriSign, Tom Sydnor of the Association for Competitive Technology, Dan Castro of the Information Technology & Innovation Foundation, David Sohn of the Center for Democracy & Technology, and Larry Downes of TechFreedom.

CEI-TechFreedom Event: What Should Lawmakers Do About Rogue Websites? from CEI Video on Vimeo.

“[There’s No Data Sheriff on the Wild Web](http://www.nytimes.com/2011/05/08/weekinreview/08bilton.html)” is an article by Nick Bilton in the *New York Times* this weekend, pointing out that no federal law punishes massive breaches of personal information like the recent Epsilon and Sony cases.

>”There needs to be new legislation and new laws need to be adopted” to protect the public, said Senator Richard Blumenthal, Democrat of Connecticut, who has been pressing Sony to answer questions about its data breach and what the company did to avoid it. “Companies need to be held accountable and need to pay significantly when private and confidential information is imperiled.”

>But how? Privacy experts say that Congress should pass legislation regulating companies if they collect certain types of information. If such laws existed today, they say, Sony could be held responsible for failing to properly protect the data by employing up-to-date security on its systems.

>Or at the very least, companies would be forced to update their security systems. In underground online forums last week, hackers said Sony’s servers were severely outdated and infiltrating them was relatively easy.

While there may be no law requiring site operators to keep their networks updated and secure, it’s not as if they currently have no incentive to do so, and it’s not as if they are completely unaccountable. Witness the (at least) two lawsuits already filed against Sony: [one in Canada](http://ingame.msnbc.msn.com/_news/2011/05/03/6577819-sony-declines-to-testify-before-congress-as-1-billion-lawsuit-filed) for $1 billion and [one in the U.S.](http://ingame.msnbc.msn.com/_news/2011/04/27/6544610-sony-sued-could-bleed-billions-following-playstation-network-hack) looking for class action status. Not to mention that the PlayStation Network is still down and losing money, and that Sony has taken a serious reputational hit. Are you now more or less likely to buy a PlayStation as your next console?

To the extent we do need legislation, it’s not to tell firms to keep their Apache servers up to date. There are plenty of terrible things that happen to a firm if it doesn’t take the security of its customers’ data seriously. Sony is living proof of that. Adding a criminal fine to the pile likely won’t improve private incentives. What prescriptive legislation might do, however, is put federal bureaucrats in charge of security standards, which is not a good thing in my book.

The missing incentive here might be the incentive to disclose that a breach has occurred. Rep. Mary Bono Mack [has suggested that she might introduce legislation](http://thehill.com/blogs/hillicon-valley/technology/159581-gop-rep-sony-playing-the-victim-in-hacker-attack) to require such disclosures. Such legislation may well be responding to a real and harmful information asymmetry. If a firm could preserve such an asymmetry, then the usual incentives wouldn’t work.

Rather than trying to legislatively predict and preempt security breaches, when it comes to the security of personal information it might be better to seek a policy of transparency and resiliency. As I explain in my [latest TIME Techland piece](http://techland.time.com/2011/05/08/why-your-personal-information-wants-to-be-free/), we may now be in a world where it’s next to impossible to ensure that at least some of our private personal information that is digitized and connected to the net won’t be compromised. To attempt to put that genie back in the bottle might be not only futile, but counterproductive. Instead, we may be better served by being informed when our data is compromised, seeking civil redress, and learning to cope with the new reality. As I write in the piece:

>On net, the fact that we now live in a hyper-connected world where information can’t be controlled is a good thing. The cultural, social, economic and political benefits of such a transparent system will likely outweigh the price we pay in privacy and security. And that’s especially the case if we learn to live with that reality.

>Human beings are incredibly resilient, and faced with a new environment, we adapt. When major changes take place—from natural disasters to the Industrial Revolution—we learn to live in the new context, but only if we acknowledge the new reality. We need to get used to this new world in which information can’t be controlled.

>Maybe a new social norm will develop that accepts that everyone will have embarrassing facts about them online, and that it’s OK because we’re human. Maybe if we assumed that data breaches are inevitable, we wouldn’t give up on securing networks, but we might do more to cope. For example, the technology exists to make all credit card numbers single-use to a particular vendor, so they’re of little value to hackers.

>Welcome to the new world. Information wants to be free. The Net interprets information control as damage and routes around it. Get used to it.
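The single-use, vendor-bound card number mentioned in the quoted piece can be illustrated with a toy scheme. This is purely hypothetical (real issuers implement card tokenization very differently, and a real system also needs counter tracking and revocation): derive each token from the account, the vendor, and a counter, so a token stolen from one merchant’s breached database is worthless anywhere else.

```python
import hashlib
import hmac

def one_time_card_token(issuer_secret: bytes, account: str,
                        vendor: str, counter: int) -> str:
    """Derive a vendor-bound, single-use token from the real account
    number. The issuer can recompute it to verify a charge; a thief
    who steals the token cannot replay it at another vendor, and a
    new counter value invalidates the old token."""
    msg = f"{account}|{vendor}|{counter}".encode()
    return hmac.new(issuer_secret, msg, hashlib.sha256).hexdigest()[:16]

secret = b"issuer-master-key"  # hypothetical issuer-side key
t1 = one_time_card_token(secret, "4111-1111", "store.example", 1)
t2 = one_time_card_token(secret, "4111-1111", "other.example", 1)
assert t1 != t2  # the same card yields a different token per vendor
```

The design choice matters for the resiliency argument in the post: if every stored card number is a derived, single-use token, a Sony-scale breach leaks data that attackers cannot monetize.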

Here’s a doozy for the cyber-hype files. After it was announced that CIA Director Leon Panetta would take over at the Department of Defense, Rep. Jim Langevin, co-chair of the CSIS cybersecurity commission and author of comprehensive cybersecurity legislation, put out [a statement that read in part](http://thehill.com/blogs/hillicon-valley/technology/158383-house-dem-says-panetta-understands-cybersecurity):

>“I am particularly pleased to know that Director Panetta will have a full appreciation for the increasing sense of urgency with which we must approach cybersecurity issues. Earlier this year, Panetta warned that ‘the next Pearl Harbor could very well be a cyberattack.’”

That’s from a [statement made](http://abcnews.go.com/News/cia-director-leon-panetta-warns-cyber-pearl-harbor/story?id=12888905) by Panetta to a House intelligence panel in February, and it’s an example of the unfortunate rhetoric that Tate Watkins and I cite in [our new paper](http://mercatus.org/publication/loving-cyber-bomb-dangers-threat-inflation-cybersecurity-policy). Pearl Harbor left over two thousand persons dead and pushed the United States into a world war. There is no evidence that a cyber-attack of comparable effect is possible.

What’s especially unfortunate about that kind of alarmist rhetoric, apart from the fact that it unduly scares citizens, is that it is often made in support of comprehensive cybersecurity legislation, like that introduced by Rep. Langevin. That bill [gives DHS the authority](http://www.govtrack.us/congress/billtext.xpd?bill=h112-1136&version=ih&nid=t0%3Aih%3A386) to issue standards for private owners of critical infrastructure and to audit them for compliance.

What qualifies as critical infrastructure? The bill has an expansive definition, so let’s hope that the “computer experts” cited in [this National Journal story](http://www.nextgov.com/nextgov/ng_20110429_3808.php) on the Sony PlayStation breach are not the ones doing the interpreting:

>While gaming and music networks may not be considered “critical infrastructure,” the data that perpetrators accessed could be used to infiltrate other systems that are critical to people’s financial security, according to some computer experts. Stolen passwords or profile information, especially codes that customers have used to register on other websites, can provide hackers with the tools needed to crack into corporate servers or open bank accounts.

It’s not hard to imagine a logic that leads everything to be considered “critical infrastructure” because, you know, everything’s connected on the network. We need to be very careful about legislating great power stemming from vague definitions and doing so on little evidence and lots of fear.

Wired reports that a recent federal court decision would make it possible for a private-sector employee to be found in violation of the Computer Fraud and Abuse Act for simply violating their employer’s data policies, without any real “hacking” having occurred. This applies not only to data access, like grabbing data via a non-password-protected computer, but also to unauthorized use, such as emailing or copying data the employee might otherwise have permission to access.

On its face, this doesn’t seem entirely unreasonable. Breaking and entering is a crime, but so is casually walking into a business or home and taking things that aren’t yours, so it seems like data theft, even without any “hacking,” should be a crime. For the law to be otherwise would create a “but he didn’t log out” defense for would-be data thieves.

But what about unauthorized use? Is there a physical property equivalent of this? Could I be criminally liable for using the corporate car to drag race against my neighbor, or would I only be fired and potentially sued in civil court? Does this new interpretation of the CFAA simply expand the scope of the law into realms already covered, perhaps more appropriately, by statutes that specifically address trade secrets or other sensitive information in a broader way that doesn’t involve computing technology?

Judge Tena Campbell noted in the dissent that under the ruling, “any person who obtains information from any computer connected to the internet, in violation of her employer’s computer-use restrictions, is guilty of a federal crime.” So, perhaps this is a case of the court overreaching in an incredibly dramatic fashion.

I hope my lawyerly co-bloggers can weigh in on this issue.

HT: Ryan Lynch