Privacy, Security & Government Surveillance

TechFreedom, CEI and ATR’s DigitalLiberty.net just put out the following statement about ECPA reform, something Ryan and I have blogged about here and here. Also check out the larger coalition letter we released in April with seven other leading free market groups and digitalfourthamendment.org.

*  *  *

WASHINGTON D.C. – Sen. Patrick Leahy (D-Vt.) today introduced legislation (S. 1011) to reform the Electronic Communications Privacy Act (ECPA).  The law, enacted in 1986, was designed to protect individuals’ privacy by limiting governmental access to electronic data stored or sent using platforms or computers owned by third parties.

“Several lawmakers have proposed sweeping new regulation of how companies collect and use data to fund and improve the online content and services cherished by consumers,” said TechFreedom President Berin Szoka.  “The costs to consumers of such regulations could be enormous, yet the harms supposedly justifying new regulations remain largely amorphous.  Today, finally, we see a bill that focuses on the one clear harm that seems to underlie most online privacy concerns: law enforcement’s access to personal data without judicial scrutiny.  Addressing that very real problem should unite everyone who cares about privacy.”

Sen. Leahy’s proposed legislation would amend ECPA to protect Americans’ private information stored remotely or in the “cloud” from unwarranted search and seizure, and limit unwarranted governmental access to mobile location information.  The reforms would implement two of the four consensus principles advocated by the Digital Due Process coalition, a diverse coalition of public interest organizations, free market groups, high-tech companies, and scholars.

Sometimes free-marketeers are branded “free market fundamentalists” or something similar by their ideological opponents. The implication is that our preference for a society in which free people interact voluntarily to organize society’s resources is an irrational desire or a religion. I’m sure there’s a similar epithet we give to nanny staters—oh, there’s one, “nanny staters”—who we believe to have excessive faith in government solutions.

Market processes have decent theoretical explanations, such as Friedrich Hayek’s essay, “The Use of Knowledge in Society.” It’s not the easiest read, but lovers of the Internet, who see the genius of its decentralization, should see similar genius in markets as a method for discovering society’s wants and uniting to achieve them—without coercion.

From time to time, we also point out examples of how market processes work to deliver even intangible goods like privacy. So, for example, I noted market pressure against Facebook’s privacy-invasive “beacon” advertising system in 2007. Berin pointed out in 2008 that market forces caused Google to remove an oppressive clause from the Chrome end user license agreement. Google competitor Cuil made a run at the search behemoth based on privacy that year, something I noted briefly then (and Ryan and I discussed in the comments). I’ve also noted the failure of many to find true market failures.

As Cuil illustrates, not every privacy play works, but companies routinely pitch the public on the privacy merits of their products and the demerits of others’. It’s not a highly visible process, but it sometimes gets a little more visible when it fails. So thank you, Facebook, for a big #FAIL in the privacy competition area this week. You provide us a nice lesson in one of the ways markets work to meet consumer privacy demands.

You see, Facebook hired PR firm Burson-Marsteller to do a whisper campaign on the privacy demerits of a Google product called Social Circle. By pushing the story of privacy problems with a Google effort in the social networking space, Facebook hoped to thwart a competitor that it fears. Success would also be a success for privacy protection. If Google were doing something wrong, and Facebook were to make the case to the public, Google would lose face and it would lose business. Most importantly, a privacy-invasive product—as determined by public consensus—would recede. Markets often work by silently shunning products that don’t cut it. (Again, hard to see if you’re not looking for it, or if you’re committed to disbelieving it.)

Facebook appears not to have succeeded. Prickly privacy advocate Chris Soghoian outed the Burson-Marsteller campaign. Dan Lyons of the Daily Beast cornered Facebook into confessing its role in the attack on Google. And privacy commentator Kashmir Hill gives the privacy issues with Social Circle a “meh.”

When it happens differently, you get a change in a service like Social Circle—the way Facebook changed “beacon” and Google changed the Chrome EULA. These are anecdotes, and they reflect but one element of the market processes that shape products and services. But it’s something that “market denialists” should consider as they dig deep to explain to themselves and others how various mechanisms in our society work.

In a post at Techland yesterday I noted that the FCC and FEMA’s new “PLAN” text-based emergency alert system might do little good since new media seems to always beat government to get out critical information:

If history is any guide, however, you may not get any messages from 1600 Pennsylvania. Since the Emergency Alert System was created in 1963, it’s never been activated, despite hurricanes, earthquakes, tornadoes, the Cuban Missile Crisis, the Oklahoma City bombing, and 9/11. Why?

The chairman of the FCC during the 9/11 attacks, Michael Powell, says that “The explosion of 24-hour-a-day, 7-day-a-week media networks in some ways has proven to supplant those original conceptions of a senior leader’s need to talk to the people.”

Given that it was Twitter, and not the President’s address, that recently broke the killing of Osama Bin Laden, you have to wonder whether the new service will be just as swiftly supplanted.

Another thing occurred to me while talking to a colleague today. The PLAN system relies on cell carriers’ ability to track your geographic location so that targeted warning messages can be sent to your phone depending on where you are at the moment. Also, as far as I can tell from the FCC’s fact sheet, you’re automatically signed up for the system when you buy a phone and you cannot opt out of presidential messages. I wonder if we’ll see a congressional hearing on the use of geo data without consumer consent?

This morning, the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law had a hearing entitled: “Protecting Mobile Privacy: Your Smartphones, Tablets, Cell Phones and Your Privacy.” It was a remarkably scattered affair, and I blogged three key—and very distinct—elements of it on the Cato@Liberty blog:

  • The Department of Justice used this “mobile privacy” hearing to call for increased surveillance of Internet and mobile phone users.
  • To escape a prosecutorial dead-end, Senator Blumenthal (D-CT) strongly suggested that he would outlaw the collection of radio signals. Where this government power would lead is quite profound.
  • Ignoring mobile privacy, Senator Schumer (D-NY) touted his hobby-horse, mobile app censorship.

Valid concerns with what mobile operating system providers Google and Apple have done with location information were somewhat lost in this disjointed and confused hearing.

I’m reading David Brin’s 1998 classic The Transparent Society and I’d like to share a passage that I found especially interesting in light of the recent Do-Not-Track bill introduced by Sen. Rockefeller.

On this blog, Adam Thierer has often written about the implicit quid pro quo between tracking and free online services. It seems to me that many folks find this an abstract concept. Here is Brin writing in the late 90s about the possibility of an explicit quid pro quo:

An Economy of Micropayments? I cannot predict whether such an experiment would succeed, though using a “carrot”—or what chaos theorists call an “attractor state”—offers better prospects than the [IP owner’s] coalition’s present strategy of saber rattling and making hollow legal threats. In fact, the same approach might be used to deal with other aspects of “information ownership,” even down to the change of address you file with the post office. Perhaps someday advertisers and mail-order corporations will pay fair market value for each small use, either directly to each person listed or through royalty pools that assess users each time they access data on a given person. Or we might apply the concept of “trading-out”: getting free time at some favorite per-use site in exchange for letting the owners act as agents for our database records. It could be beneficial to have database companies competing with each other, bidding for the right to handle our credit dossiers, perhaps by offering us a little cash, or else by letting us trade our data for a little fun. Proponents of such a “micropayment economy” contend that the process will eventually become so automatic and computerized that it effectively fades into the background. People would hardly notice the dribble of royalties slipping into their accounts when others use “their” facts—any more than they would note the outflowing stream of cents they pay while skimming on the Web.

That is essentially what happened, except without all the transaction costs. It seems to me that all Do Not Track will do is introduce the transaction costs that we have so far avoided, to the benefit of innovation. Who will this change benefit? The few people who are not willing to make the trade and who today have options to opt out. This leaves the majority of us who are willing to make the bargain in a very un-Coasean world.

On the podcast this week, Julian Sanchez, a research fellow at the Cato Institute who focuses on issues related to technology, privacy, and civil liberties, discusses electronic communications. Sanchez talks about changes in surveillance of electronic communications since 9/11, highlighting the large number of cases in which the FBI has gathered phone, internet, and banking information without judicial oversight. He then discusses the legal framework around electronic communications, which he says was built for a very different set of assumptions than we have today. Sanchez also gives a few recommendations for how to disentangle the convoluted legal standards related to electronic communications.

To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?

“There’s No Data Sheriff on the Wild Web” is an article by Nick Bilton in the New York Times this weekend, pointing out that no federal law punishes massive breaches of personal information like the recent Epsilon and Sony cases.

“There needs to be new legislation and new laws need to be adopted” to protect the public, said Senator Richard Blumenthal, Democrat of Connecticut, who has been pressing Sony to answer questions about its data breach and what the company did to avoid it. “Companies need to be held accountable and need to pay significantly when private and confidential information is imperiled.”

But how? Privacy experts say that Congress should pass legislation regulating companies if they collect certain types of information. If such laws existed today, they say, Sony could be held responsible for failing to properly protect the data by employing up-to-date security on its systems.

Or at the very least, companies would be forced to update their security systems. In underground online forums last week, hackers said Sony’s servers were severely outdated and infiltrating them was relatively easy.

While there may be no law requiring site operators to keep their networks updated and secure, it’s not as if they currently have no incentive to do so, or as if they are completely unaccountable. Witness the (at least) two lawsuits already filed against Sony: one in Canada for $1 billion and one in the U.S. seeking class action status. Not to mention that the PlayStation Network is still down and losing money, and that Sony’s reputation has taken a serious hit. Are you now more or less likely to buy a PlayStation as your next console?

To the extent we do need legislation, it’s not to tell firms to keep their Apache servers up to date. There are plenty of terrible things that happen to a firm if it doesn’t take the security of its customers’ data seriously. Sony is living proof of that. Adding a criminal fine to the pile likely won’t improve private incentives. What prescriptive legislation might do, however, is put federal bureaucrats in charge of security standards, which is not a good thing in my book.

The missing incentive here might be the incentive to disclose that a breach has occurred. Rep. Mary Bono Mack has suggested that she might introduce legislation to require such disclosures. Such legislation may well be responding to a real and harmful information asymmetry. If a firm could preserve such an asymmetry, then the usual incentives wouldn’t work.

Rather than trying to legislatively predict and preempt security breaches, when it comes to the security of personal information it might be better to seek a policy of transparency and resiliency. As I explain in my latest TIME Techland piece, we may now be in a world where it’s next to impossible to ensure that our private personal information, once digitized and connected to the net, won’t be compromised. To attempt to put that genie back in the bottle might be not only futile, but counterproductive. Instead, we may be better served by being informed when our data is compromised, seeking civil redress, and learning to cope with the new reality. As I write in the piece:

On net, the fact that we now live in a hyper-connected world where information can’t be controlled is a good thing. The cultural, social, economic and political benefits of such a transparent system will likely outweigh the price we pay in privacy and security. And that’s especially the case if we learn to live with that reality.

Human beings are incredibly resilient, and faced with a new environment, we adapt. When major changes take place—from natural disasters to the Industrial Revolution—we learn to live in the new context, but only if we acknowledge the new reality. We need to get used to this new world in which information can’t be controlled.

Maybe a new social norm will develop that accepts that everyone will have embarrassing facts about them online, and that it’s OK because we’re human. Maybe if we assumed that data breaches are inevitable, we wouldn’t give up on securing networks, but we might do more to cope. For example, the technology exists to make all credit card numbers single-use to a particular vendor, so they’re of little value to hackers.
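The single-use card idea above can be made concrete with a small sketch. This is a hypothetical scheme, not any real issuer’s API: a token is derived from the cardholder’s secret, the merchant’s identity, and a per-purchase counter, so a stolen token is useless at any other merchant and is rejected if replayed.

```python
import hmac
import hashlib

def derive_token(card_secret: bytes, merchant_id: str, counter: int) -> str:
    """Derive a vendor-locked, single-use payment token (illustrative only)."""
    msg = f"{merchant_id}:{counter}".encode()
    return hmac.new(card_secret, msg, hashlib.sha256).hexdigest()[:16]

class Issuer:
    """Toy issuer that honors each derived token exactly once."""

    def __init__(self, card_secret: bytes):
        self.card_secret = card_secret
        self.spent = set()  # tokens already redeemed

    def authorize(self, token: str, merchant_id: str, counter: int) -> bool:
        expected = derive_token(self.card_secret, merchant_id, counter)
        if not hmac.compare_digest(token, expected) or token in self.spent:
            return False  # wrong merchant, tampered token, or replay
        self.spent.add(token)
        return True

issuer = Issuer(b"cardholder-secret")
t = derive_token(b"cardholder-secret", "books.example", 1)
assert issuer.authorize(t, "books.example", 1)       # first use succeeds
assert not issuer.authorize(t, "books.example", 1)   # replay is rejected
assert not issuer.authorize(t, "other.example", 1)   # wrong merchant is rejected
```

The point of the sketch is the incentive structure the paragraph describes: a database of such tokens is nearly worthless to a thief, so a breach does far less damage regardless of what legislation says.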

Welcome to the new world. Information wants to be free. The Net interprets information control as damage and routes around it. Get used to it.

Reps. Edward Markey (D-Mass.) and Joe Barton (R-Texas) have released a discussion draft of their forthcoming “Do Not Track Kids Act of 2011.”  I’ve only had a chance to give it a quick read, but the bill, which is intended to help safeguard kids’ privacy online, has two major regulatory provisions of interest:

(1) New regulations aimed at limiting data collection about children and teens, including (a) an expansion of the Children’s Online Privacy Protection Act (COPPA) of 1998, building upon COPPA’s “verifiable parental consent” model; (b) a new “Digital Marketing Bill of Rights for Teens”; and (c) limits on the collection of geolocation information about both children and teens.

(2) An Internet “Eraser Button” for Kids to help kids wipe out embarrassing facts they have placed online but later come to regret.  Specifically, the bill would require online operators “to the extent technologically feasible, to implement mechanisms that permit users of the website, service, or application of the operator to erase or otherwise eliminate content that is publicly available through the website, service, or application and contains or displays personal information of children or minors.” This is loosely modeled on a similar idea currently being considered in the European Union, a so-called “right to be forgotten” online.

Both of these proposals were originally floated by the child safety group Common Sense Media (CSM) in a report released last December.  It’s understandable why some policymakers and child safety advocates like CSM would favor such steps. They fear that there is simply too much information about kids online today or that kids are voluntarily placing far too much personal information online that could come back to haunt them in the future. These are valid concerns, but there are both practical and principled reasons to be worried about the regulatory approach embodied in the Markey-Barton “Do Not Track Kids Act”:

A federal judge in Illinois has refused to allow a plaintiff to match IP addresses to individual names in a piracy case, indicating that use of IP addresses without any other evidence is too unreliable in identifying actual perpetrators, and as such, violates the rights of those caught in what he termed a “fishing expedition.”

In his decision, Judge Harold Baker pointed to one of several recent cases in which paramilitary-type police raids on the residences of persons suspected of downloading child pornography turned up nothing. What had happened was that the real culprit had used the household’s unsecured wireless Internet connection.


I’ve already Tweeted about it, but if you are following Internet privacy debates and have not yet had the chance to read Lauren Weinstein‘s new paper, “Do-Not-Track, Doctor Who, and a Constellation of Confusion,” it is definitely worth a look.  Weinstein, founder of the Privacy Forum, zeroes in on two related issues that I have made the focus of much of my work on this issue: (1) the fact that Do Not Track is seemingly viewed by some as a silver-bullet quick fix to online privacy concerns but will really be far more complicated in practice to enforce, and (2) that Do Not Track regulation will likely have many unintended consequences, most of which are going unexplored by proponents.

For example, Weinstein says:

Do-not-track in actuality encompasses an immensely heterogeneous mosaic of issues and considerations, not appropriately subject to simplistic approaches or “quick fix” solutions.   Approaching this area without a realistic appreciation of such facts is fraught with risks and the potential for major undesirable collateral damages to businesses, organizations, and individuals. Attempts to portray these controversies as “black or white” topics subject to rapid or in some cases even unilaterally imposed resolutions may be politically expedient, but are ultimately both childish and dangerous. […] Above all, we should endeavor to remember that tracking issues both on and off the Internet are in reality part of a complicated whole, a multifaceted  set of problems — and very importantly — potentials as well. The decisions that we make now regarding these issues will likely have far-ranging implications and effects on the Internet for many years to come, perhaps for decades.
