Privacy, Security & Government Surveillance

Jane Yakowitz of Brooklyn Law School recently posted an interesting 63-page paper on SSRN entitled “Tragedy of the Data Commons.” For those following the current privacy debates, it is a must-read, since it points out a simple truism: increased data privacy regulation could result in the diminution of many beneficial information flows.

Cutting against the grain of modern privacy scholarship, Yakowitz argues that “The stakes for data privacy have reached a new high water mark, but the consequences are not what they seem. We are at great risk not of privacy threats, but of information obstruction.” (p. 58)  Her concern is that “if taken to the extreme, data privacy can also make discourse anemic and shallow by removing from it relevant and readily attainable facts.” (p. 63)  In particular, she worries that “The bulk of privacy scholarship has had the deleterious effect of exacerbating public distrust in research data.”

Yakowitz is right to be concerned. Access to broad data sets that include anonymized profiles of individuals is profoundly important for countless sectors and professions: journalism, medicine, economics, law, criminology, political science, environmental sciences, and many, many others. Yakowitz does a brilliant job documenting the many “fruits of the data commons” by showing how “the benefits flowing from the data commons are indirect but bountiful.” (p. 5) This isn’t about those sectors making money. It’s about how researchers in those fields use information to improve the world around us. In essence, more data = more knowledge. If we want to study and better understand the world around us, researchers need access to broad (and continuously refreshed) data sets. Overly restrictive privacy regulations or forms of liability could slow that flow, weaken research capabilities and output, and leave society worse off because of the resulting ignorance.

I guess the search for market failure in the privacy area is interesting to me. I wrote about it the other week too. It’s nice that those who prefer regulation feel obligated to justify that preference. It’s an acknowledgment of the fact, increasingly well-accepted worldwide, that functioning free markets do a better job of discovering and satisfying consumers’ interests than any other method for organizing society’s resources.

A recent market failure blog post called “Privacy and the Market for Lemons, or How Websites Are Like Used Cars,” seems to have piqued Adam’s interest. (See the comments.) In it, privacy and anonymity researcher Arvind Narayanan makes the case for privacy market failure. (Evidently, it’s an argument that others have made before.)

“In the realm of online privacy and data collection,” he says, “information asymmetry results from a serious lack of transparency around privacy policies. The website or service provider knows what happens to data that’s collected, but the user generally doesn’t.” Several economic, architectural, cognitive and regulatory limitations/flaws “have led to a well-documented market failure—there’s an arms race to use all means possible to entice users to give up more information, as well as to collect it passively through ever-more intrusive means.”

Alas, there’s no link at “well-documented.” I would like to see that documentation. But more importantly, what Narayanan appears to be speaking of as market failure—an arms race to get more information from Web users—is not one. That’s market action that Narayanan doesn’t like.

So where’s the market failure?

Today, the U.S. Senate Commerce Committee held a hearing on “The State of Online Consumer Privacy.”

The push for online privacy regulation has real momentum, as proposed privacy legislation from numerous lawmakers, a Department of Commerce report proposing a compulsory Do Not Track mechanism to regulate business marketing practices, and the Obama Administration’s proposed “Privacy Bill of Rights” all indicate.

However, Congress should be very wary of such proposals. A politically defined Do Not Track regime risks undermining targeted advertising, impeding business transactions that occur between strangers, and stifling mobile ecosystems that are barely out of the cradle. Rattling consumers needlessly by encouraging them to opt out of largely beneficial information collection is an especially unwise idea in our uncertain economic climate – especially when major industry participants are developing such mechanisms on their own.

The opportunity to undermine online marketing – wrongly called “surveillance” – appeals to some, but such privacy purists have no right to call the shots for anyone but themselves and those who agree with them. The right to use information acquired through voluntary transactions is no less important than the right to decide whether to disclose information in the first place.


National Journal reports that the Department of Commerce (NTIA) will, at a Senate Commerce Committee hearing today, call for a “consumer privacy bill of rights”—a euphemism for sweeping privacy regulation:

“Having carefully reviewed all stakeholder comments to the Green Paper, the department has concluded that the U.S. consumer data privacy framework will benefit from legislation to establish a clearer set of rules for the road for businesses and consumers, while preserving the innovation and free flow of information that are hallmarks of the Internet,” [NTIA Administrator Larry] Strickling said in his prepared testimony obtained by Tech Daily Dose.

In other words: “We’ve taken the time to think this through very carefully and have reluctantly come to the conclusion that regulation is necessary.” Sorry, but I’m just not buying it—neither the wisdom of the recommendation nor the process that produced it. Let’s consider the timeline here:

  • October 27, 2010 – NTIA Administrator Strickling announces Green Paper is coming but says nothing about timing and little about substance
  • December 16, 2010 – NTIA/Commerce releases its Privacy Green Paper
  • January 28, 2011 – deadline for public comments (28 non-holiday business days later)
  • ??? – Commerce decides regulation is necessary
  • March 16, 2011 – Commerce is ready to ask Congress for legislation (31 non-holiday business days later)

The Commerce Department gave the many, many interested parties the worst four weeks of the year—including Christmas, New Year’s and Martin Luther King Day—to digest and comment on an 88-page, ~31,000-word tome of a report on proposed regulation of how information flows in our… well, information economy. Oh, and did I mention that those same parties had already been given a deadline of January 31, 2011 to comment on the FTC’s 122-page, ~34,000-word privacy report, released back on December 1 (too bad for those celebrating Hanukkah)? In fairness, the FTC did, on January 21, extend its deadline to February 18—but that hardly excuses the Commerce Department’s rush to judgment.

You have to wade through a lot to reach the good news at the end of Time reporter Joel Stein’s article about “data mining”—or at least data collection and use—in the online world. There’s some fog right there: what he calls “data mining” is actually ordinary one-to-one correlation of bits of information, not mining historical data to generate patterns that are predictive of present-day behavior. (See my data mining paper with Jeff Jonas to learn more.) There is, of course, some genuine data mining in the online advertising industry’s use of the data consumers emit online.
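The distinction can be made concrete with a toy sketch (the data and function names here are entirely hypothetical, invented for illustration): one-to-one correlation merely joins records on a shared identifier, while data mining generalizes from historical records to predict behavior.

```python
from collections import Counter

# Hypothetical toy data standing in for the two practices the article blurs together.
profiles = {"u1": {"zip": "20001"}, "u2": {"zip": "10013"}}
page_views = [("u1", "sports"), ("u2", "finance"), ("u1", "sports")]

def link(profiles, views):
    """One-to-one correlation: simply attach each record to its matching profile.

    Nothing is predicted; known facts are just joined on a shared key."""
    return [(profiles[uid]["zip"], topic) for uid, topic in views]

def mine_top_topic(views):
    """Data mining, crudely: generalize from historical views to a per-user
    pattern (the most frequent topic) that predicts future interest."""
    counts = {}
    for uid, topic in views:
        counts.setdefault(uid, Counter())[topic] += 1
    return {uid: c.most_common(1)[0][0] for uid, c in counts.items()}
```

The first function is the sort of record linkage most ad-serving involves; only the second resembles mining in the pattern-discovery sense.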

Next, get over Stein’s introductory language about the “vast amount of data that’s being collected both online and off by companies in stealth.” That’s some kind of stealth if a reporter can write a thorough and informative article in Time magazine about it. Does the moon rise “in stealth” if you haven’t gone outside at night and looked at the sky? Perhaps so.

Now take a hard swallow as you read about Senator John Kerry’s (D-Mass.) plans for government regulation of the information economy.

Kerry is about to introduce a bill that would require companies to make sure all the stuff they know about you is secured from hackers and to let you inspect everything they have on you, correct any mistakes and opt out of being tracked. He is doing this because, he argues, “There’s no code of conduct. There’s no standard. There’s nothing that safeguards privacy and establishes rules of the road.”

Securing data from hackers and letting people correct mistakes in data about them pull in opposite directions. If you’re going to make data about people available to them, you’re going to create opportunities for other people—it won’t even take hacking skills, really—to impersonate them, gather private data, and scramble data sets.

What I hoped would be a short blog post to accompany the video from Geoff Manne’s and my appearances this week on PBS’s “Ideas in Action with Jim Glassman” turned out to be a very long article, which I’ve published over at Forbes.com.

I apologize to Geoff for taking an innocent comment he made on the broadcast completely out of context, and to everyone else who chooses to read 2,000 words I’ve written in response.

So all I’ll say here is that Geoff Manne and I taped the program in January, as part of the launch of TechFreedom and of “The Next Digital Decade.” Enjoy!


Few people have experienced just how oppressive “privacy” regulation can be quite so directly as Peter Fleischer, Google’s Global Privacy Counsel. Early last year, Peter was convicted by an Italian court because Italian teenagers used Google Video to host a video they shot of themselves bullying an autistic kid—even though he didn’t know about the video until after Google took it down.

Of course, imposing criminal liability on corporate officers for failing to take down user-generated content is just a more extreme form of the more popular concept of holding online intermediaries liable for failing to take down content that is allegedly defamatory, bullying, invasive of a user’s privacy, etc.  Both have the same consequence: Given the incredible difficulty of evaluating such complaints, sites that host UGC will tend simply to take it down upon receiving complaints—thus being forced to censor their own users.

Now Peter has turned his withering analysis on the muddle that is Europe’s popular “Right to Be Forgotten.” Adam noted the inherent conflict between that supposed “right” and our core values of free speech. It’s exactly the kind of thing UCLA Law Prof. Eugene Volokh had in mind when he asked, “what is your ‘right to privacy’ but a right to stop me from observing you and speaking about you?” Peter hits the nail on the head:

More and more, privacy is being used to justify censorship. In a sense, privacy depends on keeping some things private, in other words, hidden, restricted, or deleted. And in a world where ever more content is coming online, and where ever more content is find-able and share-able, it’s also natural that the privacy counter-movement is gathering strength. Privacy is the new black in censorship fashions. It used to be that people would invoke libel or defamation to justify censorship about things that hurt their reputations. But invoking libel or defamation requires that the speech not be true. Privacy is far more elastic, because privacy claims can be made on speech that is true.


Twitter curmudgeon @derekahunter writes: “With all the medical advances of last 100 years, why hasn’t anyone created a cough drop that doesn’t taste like crap?” Dammit, he’s right! Why hasn’t the market for cold remedies produced a tasty cough drop? Put differently, the market for cold remedies has failed to produce a tasty cough drop. The market has failed. Market . . . failure.

We have now established the appropriateness of a regulatory solution for the taste problem in the field of cold remedies. Have we not? There is a market failure.

No, we haven’t.

“Market failure” is not what happens when a given market has failed so far to reach outcomes that a smart person would prefer. It occurs when the rules, signals, and sanctions in and around a given marketplace would cause preference- and profit-maximizing actors to reach a sub-optimal outcome. You can’t show that there’s a market failure by arguing that the current state of the actual market is non-ideal. You have to show that the rules around that marketplace lead to non-ideal outcomes. The bad taste of cough drops is not evidence of market failure.

The failure of property rights to account for environmental values leads to market failure. A coal-fired electric plant might belch smoke into the air, giving everyone downwind a bad day and a shorter life. If the company and its customers don’t have to pay the costs of that, they’re going to over-produce and over-consume electricity at the expense of the electric plant’s downwind neighbors. The result is sub-optimal allocation of goods, with one set of actors living high on the hog and another unhappily coughing and wheezing.

Take an issue that’s closer to home for tech policy folk: People seem to underweight their privacy when they go online, promiscuously sharing anything and everything on Facebook, Twitter, and everyplace else. Marketers are hoovering up this data and using it to sell things to people. The data is at risk of being exposed to government snoops. People should be more attentive to privacy. They’re not thinking about long-term consequences. Isn’t this a market failure?

It’s not. It’s consumers’ preferences not matching up with the risks and concerns that people like me and my colleagues in the privacy community share. Consumers are preference-maximizing—but we don’t like their preferences! That is not a market failure. Our job is to educate people about the consequences of their online behavior, to change the public’s preferences. That’s a tough slog, but it’s the only way to get privacy in the context of maximizing consumer welfare.

If you still think there’s a market failure in this area—I readily admit that I’m on the far edge of my expertise with complex economic concepts like this—you haven’t finished making your case for regulation. You need to show that the rules, signals, and sanctions in and around the regulatory arena would produce a better outcome than the marketplace would. Be sure that you compare real market outcomes to real regulatory outcomes, not real market outcomes to ideal regulatory outcomes. Most arguments for privacy regulation simply fail to account for the behavior of the regulatory universe.

Adam has collected quotations on the subject of regulatory capture from many experts. I wrote a brief series of “real regulators” posts on the SEC and the Madoff scam a while back (1, 2, 3). And a recent article I’m fond of goes into the problem that many people think only consumers suffer, asking: “Are Regulators Rational?”

There’s no good-tasting cough drop because the set of drops that remedy coughing and the set of drops that taste good are mutually exclusive. Not because of market failure.

[UPDATE Feb. 2012: This little essay eventually led to an 80-page working paper, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.”]


In this essay, I will suggest that (1) while “moral panics” and “techno-panics” are nothing new, their cycles seem to be accelerating as new communications and information networks and platforms proliferate; (2) new panics often “crowd-out” or displace old ones; and (3) the current scare over online privacy and “tracking” is just the latest episode in this ongoing cycle.

What Counts as a “Techno-Panic”?

First, let’s step back and define our terms. Christopher Ferguson, a professor in Texas A&M’s Department of Behavioral, Applied Sciences and Criminal Justice, offers the following definition: “A moral panic occurs when a segment of society believes that the behavior or moral choices of others within that society poses a significant risk to the society as a whole.” By extension, a “techno-panic” is simply a moral panic that centers on societal fears about a specific contemporary technology (or technological activity) instead of merely the content flowing over that technology or medium. In her brilliant 2008 essay on “The MySpace Moral Panic,” Alice Marwick noted:

It seems peculiar to me that some of the same individuals and groups who so vociferously opposed a “broadcast flag” technological mandate in past years are now in a mad rush to have federal policymakers mandate a “Do Not Track” regulatory regime for privacy purposes. The broadcast flag debate, you will recall, centered around the wisdom of mandating a technological fix to the copyright arms race before digitized high-definition broadcast signals were effectively “Napster-ized.” At least that was the fear six or seven years ago. TV broadcasters and some content companies wanted the Federal Communications Commission (FCC) to recognize and enforce a string of code that would have been embedded in digital broadcast program signals such that mass redistribution of video programming could have been prevented.

Flash forward to the present debate about mandating a “Do Not Track” scheme to help protect privacy online. As I noted in my filing last week to the Federal Trade Commission, at root, Do Not Track is just another “information control regime.” Much like the broadcast flag proposal, it’s an attempt to use a technological quick-fix to solve a complex problem. When it comes to such information control efforts, however, there aren’t many good examples of simple fixes or silver-bullet solutions that have worked, at least not for very long. The debates over Wikileaks, online porn, Internet hate speech, and spam all demonstrate how challenging it can be to put information back into the bottle once it is released into the digital wild.

To be clear, I am not opposed to technological solutions like the broadcast flag or Do Not Track, but I am opposed to forcing them upon the Internet and digital markets in a top-down, centrally planned fashion. While I am skeptical that either scheme would work well in practice (whether voluntary or mandated), my concern in these debates is that forcing such solutions by law will have many unintended consequences, not the least of which will be the gradual growth of invasive cyberspace controls in these or other contexts. After all, if we can have “broadcast flags” and “Do Not Track” schemes, why not “flag” mandates for objectionable speech or “Do Not Porn” browser mandates?