Joseph Hall on e-voting

October 30, 2012

Elections are coming up, and though we’re well into the 21st century, we still can’t vote online. This archived episode discusses the future of voting.

Joseph Hall, Senior Staff Technologist at the Center for Democracy and Technology and a former postdoctoral researcher at the UC Berkeley School of Information, discusses e-voting. Hall explains the often muddled distinction between electronic and internet voting, and the security concerns of each. He also covers the benefits and costs of different voting systems, the limits to meaningful recounts with digital voting systems, and why internet voting can be a bad idea.



On the front page of today’s New York Times, Defense Secretary Leon Panetta again sounds the alarm about a “cyber Pearl Harbor.”

“An aggressor nation or extremist group could use these kinds of cyber tools to gain control of critical switches,” Mr. Panetta said. “They could derail passenger trains, or even more dangerous, derail passenger trains loaded with lethal chemicals. They could contaminate the water supply in major cities, or shut down the power grid across large parts of the country.”

Defense officials insisted that Mr. Panetta’s words were not hyperbole, and that he was responding to a recent wave of cyberattacks on large American financial institutions. He also cited an attack in August on the state oil company Saudi Aramco, which infected and made useless more than 30,000 computers.

Not hyperbole, hmm? It’s the usual cyber fear two-step. First lay out a doomsday scenario involving hackers remotely derailing trains full of lethal chemicals. Second, cite recent attacks as evidence that the threat is real. Except let’s look at the cited evidence.

Here’s how the New York Times itself described the recent attacks on banks:

The banks suffered denial of service attacks, in which hackers barrage a Web site with traffic until it is overwhelmed and shuts down. Such attacks, while a nuisance, are not technically sophisticated and do not affect a company’s computer network — or, in this case, funds or customer bank accounts. But they are enough to upset customers.
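For readers curious about the mechanics, the dynamic the Times describes is easy to model: a server absorbs requests up to some rate, and a flood beyond that rate either gets shed by rate limiting or, absent such protection, takes the site down. The sketch below is a generic toy illustration in Python; the `SlidingWindowLimiter` class, its parameters, and the traffic numbers are invented for the example, not drawn from the attacks described above.

```python
from collections import deque

class SlidingWindowLimiter:
    """Toy rate limiter: accept at most `limit` requests per `window` seconds.

    Generic illustration only -- not a description of any real bank's defenses.
    """

    def __init__(self, limit: int, window: float):
        self.limit = limit        # max requests tolerated per window
        self.window = window      # window length in seconds
        self.times = deque()      # timestamps of accepted requests

    def allow(self, now: float) -> bool:
        # Discard timestamps that have aged out of the sliding window.
        while self.times and now - self.times[0] >= self.window:
            self.times.popleft()
        if len(self.times) < self.limit:
            self.times.append(now)
            return True
        return False              # capacity exhausted: request is shed


# Simulate a flood: 1,000 requests arriving in one second against a
# server that tolerates only 100 per second.
limiter = SlidingWindowLimiter(limit=100, window=1.0)
results = [limiter.allow(now=i / 1000.0) for i in range(1000)]
print(sum(results))  # prints 100 (the other 900 requests were shed)
```

In the toy run, 900 of the 1,000 requests in the one-second burst are turned away; real mitigation works on the same principle at vastly larger scale. Which is all to say: a nuisance for customers, not a derailed train.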

Explosive stuff. And what about that attack on Saudi Aramco? Serious, to be sure, even if no control systems were breached, but as Reuters recently reported,

One or more insiders with high-level access are suspected of assisting the hackers who damaged some 30,000 computers at Saudi Arabia’s national oil company last month, sources familiar with the company’s investigation say. …

The hackers’ apparent access to a mole, willing to take personal risk to help, is an extraordinary development in a country where open dissent is banned.

“It was someone who had inside knowledge and inside privileges within the company,” said a source familiar with the ongoing forensic examination.

What this shows is that one of the greatest threats to networks is not master hackers tunneling their way in, but good old-fashioned spies. The cybersecurity legislation that Panetta and the administration are pushing cannot prevent a determined insider with access and permissions from carrying out an attack. It can, however, distort the incentives of businesses and hamper innovation.

Scott Shackelford, assistant professor of business law and ethics at Indiana University, and author of the soon-to-be-published book Managing Cyber Attacks in International Law, Business, and Relations: In Search of Cyber Peace, explains how polycentric governance could be the answer to modern cybersecurity concerns.

Shackelford originally began researching collective action problems in physical commons, including Antarctica, the deep seabed, and outer space, where he discovered the efficacy of polycentric governance in addressing these issues. Noting the similarities between these communally owned resources and the Internet, Shackelford was drawn to the idea of polycentric governance as a solution to the collective action problems he identified in the online realm, particularly when it came to cybersecurity.

Shackelford contrasts the bottom-up form of governance characterized by self-organization and networking regulations at multiple levels with the increasingly state-centric approach prevailing in forums like the International Telecommunication Union (ITU). Analyzing the debate between Internet sovereignty and Internet freedom through the lens of polycentric regulation, Shackelford reconceptualizes both cybersecurity and the future of Internet governance.



President Obama seems to be poised once again to use executive powers to get what Congress won’t give him.

In this case, it’s the imposition of a sweeping set of cybersecurity mandates and regulations on the private sector. My latest commentary addresses the problems of the original Cybersecurity Act, which did not muster enough support in the Senate to come to a vote, and why a White House decision to implement it by executive order would simply expand the government’s surveillance and data-gathering power while doing little to secure the nation’s information infrastructure.

Find the commentary here.

Parmy Olson, London bureau chief for Forbes, discusses her new book We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency. The book is an inside look at the people behind Anonymous, explaining the movement’s origins as a group of online pranksters and how they developed into the best-known hacktivist organization in the world. Olson discusses the tension that has existed between those who would rather just engage in pranks and those who want to use Anonymous to protest groups they see as trying to clamp down on internet freedom, as well as some of the group’s most famous campaigns, like the attacks against the Church of Scientology and the campaign against PayPal and MasterCard. Olson also describes the development of LulzSec, which became famous for a series of attacks in 2011 on high-profile websites including Fox, PBS, Sony, and the CIA.


By Ryan Radia and Berin Szoka

A new version of the Cybersecurity Act of 2012 was introduced last night (PDF), and a vote on the Senate floor reportedly may occur as early as next week. Although we’re still digesting the 211-page bill, its revised information sharing title stands out for its meaningful safeguards regarding what cybersecurity information may be shared by providers and its limits on how government may use shared information. Such prudence is of utmost importance in any bill that gives private entities blanket immunity from civil and criminal laws, including the common law, for activities such as cybersecurity information sharing.

By way of background, our organizations—the Competitive Enterprise Institute and TechFreedom—joined several other free market groups in sending a coalition letter to House leadership back in April regarding CISPA (which ultimately passed that chamber). While we support legislation streamlining federal laws to ensure cybersecurity information flows freely among private companies and, where appropriate, to and from the government, we urged important changes to CISPA to limit potential governmental abuses and meaningfully protect individuals’ private information. Unfortunately, most of our suggestions were not reflected in the final version of that bill.

We’re very glad to see that many of our free market principles are now reflected in Title VII of the Cybersecurity Act (the part of the bill that deals with information sharing). The bill’s sponsors adopted many significant, positive changes to Title VII to better protect privacy and individual liberties, including:

  • Allowing individuals harmed by governmental misuse of shared cyber threat information to sue the federal government for actual or statutory damages of $1000 (whichever is greater);
  • Proscribing all governmental use and sharing of cyber threat information for purposes unrelated to cybersecurity, except to avert imminent threats of death or serious bodily harm or sexual exploitation of minors;
  • Barring the federal government from conditioning the award of a federal grant, contract, or purchase on a private entity’s sharing of cybersecurity threat information (except in limited circumstances);
  • Immunizing only private entities that share cybersecurity threat information upon a reasonable and good faith belief that such sharing is authorized by the Title;
  • Providing for meaningful oversight of information sharing and use by the Privacy and Civil Liberties Oversight Board.

We also applaud Senators Franken, Durbin, Coons, Wyden, Blumenthal, and Sanders, whose efforts made these important revisions to the Cybersecurity Act possible. It’s not every day that CEI or TechFreedom praises members of Congress—or government in general! We do so here because the changes to Title VII of the Cybersecurity Act will meaningfully reduce the likelihood that the bill, if enacted, will enable government to impermissibly access and abuse citizens’ private information. (For more on changes to the Cybersecurity Act, see this ACLU blog post by Michelle Richardson.)

To be sure, we still have serious concerns about Title VII of the bill — and even greater concerns about other provisions in the bill, especially those regulating cybersecurity of “critical infrastructure”. We’ll offer plenty of criticism about those provisions in coming days, but for now, seeing a few rays of light from Capitol Hill is enough to give us pause.

[Based on a forthcoming article in the Minnesota Journal of Law, Science & Technology, Vol. 14, Issue 1, Winter 2013.]

I hope everyone caught these recent articles by two of my favorite journalists, Kashmir Hill (“Do We Overestimate The Internet’s Danger For Kids?”) and Larry Magid (“Putting Techno-Panics into Perspective.”) In these and other essays, Hill and Magid do a nice job discussing how society responds to new Internet risks while also explaining how those risks are often blown out of proportion to begin with.


Eli Dourado, a research fellow at the Mercatus Center at George Mason University, discusses malware and possible ways to deal with it. Dourado notes several shortcomings of a government response, including the fact that the people who create malware come from many different countries, some of which would not cooperate with the US or other countries seeking to punish a malware author. Introducing indirect liability for ISPs whose users spread malware, as some suggest, is not necessary, according to Dourado. Service providers have already developed informal institutions on the Internet to deal with the problem, and these real informal systems are more efficient than a hypothetical liability regime, Dourado argues.



Tyler Cowen asks on his blog today:

> By the way, didn’t it just come out in *The Washington Post* that the United States helped attack Iran with Flame, Stuxnet and related programs? If they did this to us, wouldn’t we consider it an act of war? Didn’t we just take a major step toward militarizing the internet? Doesn’t it seem plausible to you that the cyber-assault is not yet over and thus we face immediate questions looking forward? Won’t somebody fairly soon try to do it to us? Won’t it encourage substitution into more dangerous biological weapons?

Those are good questions. Let’s take them in turn.

**If they did it to us, would we consider it an act of war?** I tend to agree with Franz-Stefan Gady’s perspective that Stuxnet should not be considered an act of war. One of the most overlooked aspects of the great reporting done by the NYT and WaPo uncovering the details of Stuxnet is that the U.S. did not “hack in” to Iran’s nuclear facilities from thousands of miles away. Instead, it had to rely on Israel’s extensive intelligence apparatus not only to understand the target, but to deliver the worm as well. That is, humans had to physically infiltrate Iran’s operations to engage in the spying and then the sabotage.

Espionage is not an act of war under international law. Nations expect and tolerate espionage as an inevitable political practice. Spies are sometimes prosecuted criminally when caught, sometimes traded for other spies, and often simply expelled from the country. Sabotage I’m less certain about, but I think it inhabits a similar space as espionage: frowned upon, prosecuted criminally, but not an act of war *per se*. (I’ve been trying to find the answer to that question in vain, so if any international law experts would like to send me the answer, I’d appreciate it.)

So what do we have with Flame? It’s essentially spying, albeit in a frighteningly efficient manner. But it’s not an act of war. Stuxnet is similarly not an act of war if we assume sabotage is not. There’s little difference between Stuxnet and a spy infiltrating Natanz and throwing a wrench into the works; Stuxnet is just the wrench. Now, it’s key to point out what makes Stuxnet political sabotage and not terrorism: there were no deaths, much less civilian deaths.

**Did we take a big step in militarizing the Internet? Won’t somebody fairly soon try to do it to us?** Well, it’s already happening, and it’s been happening for years. U.S. government networks are very often the target of espionage–and maybe even sabotage–by foreign states. If something feels new about Stuxnet, it’s that for the first time we have definitive attribution to a state. As a result, the U.S. loses the moral high ground when it comes to cybersecurity, and if someone doing it to the U.S. gets caught, they will be able to say, “You started it.” But they’re already doing it. Not that it’s necessarily a good thing, but the militarization of cyberspace is not just inevitable; it’s been well underway for some time.

Finally, Tyler asks, **Won’t it encourage substitution into more dangerous biological weapons?** The answer to that, I think, is a definitive no. “Cyber weapons” are completely different from biological weapons, and from chemical, conventional, and certainly nuclear ones. For one thing, they are nowhere near as dangerous. No one has ever died from a cyber attack. Again, short of already being in a shooting war, these capabilities won’t be employed beyond espionage and surgical sabotage like Stuxnet.

That raises the question, however: if we’re in a shooting war with a Libya or a Syria, say, will they resort to cyber? Perhaps, but as Thomas Rid has pointed out, the more destructive a “cyber weapon,” the more difficult and costly it is to employ. Massively so. This is why it’s probably only the U.S. at this point that has the capability to pull off an operation as difficult as Stuxnet, and then only with the assistance of Israel’s existing traditional intelligence operation. Neither al Qaeda, nor Anonymous, nor even Iran will be able to carry out an operation on the same level as Stuxnet any time soon.

So, Tyler, you can sleep well. For now at least. ;o) Yes, we should have a national discussion about what sorts of weapons we want our government employing, and what sort of authorization and oversight should be required, but we should not panic or think we’re a few keystrokes away from Armageddon. The more important question to me is, why does one keep $2.85 million in bitcoin?

That is the title of my new working paper, out today from Mercatus. The abstract:

> Lichtman and Posner argue that legal immunity for Internet service providers (ISPs) is inefficient on standard law and economics grounds. They advocate indirect liability for ISPs for malware transmitted on their networks. While their argument accurately applies the conventional law and economics toolkit, it ignores the informal institutions that have arisen among ISPs to mitigate the harm caused by malware and botnets. These informal institutions carry out the functions of a formal legal system—they establish and enforce rules for the prevention, punishment, and redress of cybersecurity-related harms.

> In this paper, I document the informal institutions that enforce network security norms on the Internet. I discuss the enforcement mechanisms and monitoring tools that ISPs have at their disposal, as well as the fact that ISPs have borne significant costs to reduce malware, despite their lack of formal legal liability. I argue that these informal institutions perform much better than a regime of formal indirect liability. The paper concludes by discussing how the fact that legal polycentricity is more widespread than is often recognized should affect law and economics scholarship.

While I frame the paper as a reply to Lichtman and Posner, I think it also conveys information that is relevant to the debate over CISPA and related Internet security bills. Most politicians and commentators do not understand the extent to which Internet security is peer-produced, or why security institutions have developed in the way they have. I hope that my paper will lead to a greater appreciation of the role of bottom-up governance institutions on the Internet and beyond.

Comments on the paper are welcome!