Privacy, Security & Government Surveillance

If you haven’t seen Edward Hasbrouck’s talk on government surveillance of travel IT systems, you should.

It’s startling to learn just how much access people other than your airline have to your air travel plans.

Here’s just one image that Hasbrouck put together to illustrate what the system looks like.

He’ll be presenting his travel surveillance talk at the Cato Institute at noon on April 2nd. We’ll also be discussing the new public notice on airport strip-search machines issued by the TSA earlier this week.

Register now for Travel Surveillance, Traveler Intrusion.

Today Reason has published my policy paper addressing privacy concerns created by search, social networking and Web-based e-commerce in general.

These web sites have been in the regulatory crosshairs for some time, although Congress and the Federal Trade Commission have been hesitant to push forward with restrictive legislation such as “Do Not Track,” mandatory opt-in, or top-down mandates such as the White House–drafted “Privacy Bill of Rights.” And the U.S. seems unwilling to go to the lengths Europe is, where regulators are contemplating unworkable rules like an “Internet eraser button”—a sort of online memory hole that would scrub any information about you that is accessible on the Web, even if it is part of the public record.

In my paper, It’s Not Personal: The Dangers of Misapplied Policies to Search, Social Media and Other Web Content, I discuss the difficulty of regulating personal disclosure because different people have different thresholds for privacy. We all know people who refuse to go on Facebook because they are wary of allowing too much information about themselves to circulate. Where it gets dicey is when authority figures take a paternalistic attitude and start deciding what information I will not be allowed to share, for what they claim is my own good.

Top-down mandates really don’t work, mainly because popular attitudes are always in flux. Offer me 50 percent off on a hotel room, and I may be willing to tell you where I’m vacationing. Find me interesting books and movies, and I may be happy to let you know my favorite titles.

Instead, ground-up guidelines that arise as users become more comfortable with the medium, and as sites work to establish trust, work better. True, Google and Facebook often push the envelope in trying to determine where user boundaries are, but they pull back when they run into user protest. And when the FTC took up Google’s and Facebook’s practices, the agency shook a metaphorical finger at both companies’ aggressiveness but assessed no fines or penalties, essentially finding that no consumer harm was done.

This course has been wise. The willingness of users to exchange information about themselves in return for value is an important element of e-commerce. It is worth considering some likely consequences if the government pushes too hard to prevent sites from gathering information about users.

Last week on his personal blog, Peter Fleischer, Global Privacy Counsel for Google, posted an interesting essay entitled “We Need a Better, Simpler Narrative of US Privacy Laws.” Fleischer says that Europe has done a better job marketing its privacy regime to the world than the United States and argues that “The US has to figure out how to explain its privacy laws on the global stage” since “Europe is convincing many countries around the world to implement privacy laws that follow the European model.” He notes that “in the last year alone, a dozen countries in Latin America and Asia have adopted euro-style privacy laws [while] not a single country, anywhere, has followed the US model.” Fleischer argues that this has ramifications for long-term trade policy and global Internet regulation more generally.

I found this essay very interesting because I deal with some of these issues in my latest law review article, “The Pursuit of Privacy in a World Where Information Control is Failing” (Harvard Journal of Law & Public Policy, vol. 36, no. 2, Spring 2013). In the article, I suggest that the U.S. does have a unique privacy regime and it is one that is very similar in character to the regime that governs online child safety issues. Whether we are talking about online safety or digital privacy, the defining characteristics of the U.S. regime are that it is bottom-up, evolutionary, education-based, empowerment-focused, and resiliency-centered. It focuses on responding to safety and privacy harms after exhausting other alternatives, including market responses and the evolution of societal norms.

The EU regime, by contrast, is more top-down in character and takes a more static, inflexible view of privacy rights. It tries to impose a one-size-fits-all model on a diverse citizenry and it attempts to do so through heavy-handed data directives and ongoing “agency threats.” It is a regime that makes more sweeping pronouncements about rights and harms and generally recommends a “precautionary principle” approach to technological change in which digital innovation is more “permissioned.”

Put simply, the U.S. regime is reactive in character while the E.U. regime is more preemptive. The U.S. system focuses on responding to safety and privacy problems using a more diverse toolbox of solutions, some of which are governmental in character while others are based on evolving social and market norms and responses. To be clear, law does enter the picture here in the U.S., but it does so in a very different way than it does in the E.U.

I’m excited to announce the release of my latest law review article, “The Pursuit of Privacy in a World Where Information Control is Failing,” which appears in the next edition (vol. 36) of the Harvard Journal of Law & Public Policy. This is the first of two complementary law review articles that I will be releasing this year dealing with privacy policy. The second, which will be published later this summer by the George Mason University Law Review, is entitled “A Framework for Benefit-Cost Analysis in Digital Privacy Debates.” (FYI: Both articles focus on privacy claims made against private actors — namely, efforts to limit private data collection — and not on privacy rights against governments.)

The new Harvard Journal article is divided into three major sections. Part I focuses on some of the normative challenges we face when discussing privacy and argues that there may never be a widely accepted, coherent legal standard for privacy rights or harms here in the United States. It also explores the tensions between expanded privacy regulation and online free speech. Part II turns to the many enforcement challenges that are often ignored when privacy policies are being proposed or formulated and argues that legislative and regulatory efforts aimed at protecting privacy now confront an increasingly intractable information control problem. Most of the problems policymakers and average individuals face when it comes to controlling the flow of private information online are similar to the challenges they face when trying to control the free flow of digitized bits in other information policy contexts, such as online safety, cybersecurity, and digital copyright.

If the effectiveness of law and regulation is limited by the normative considerations discussed in Part I and the practical enforcement complications discussed in Part II, what alternatives remain to assist privacy-sensitive individuals? I address that question in Part III of the paper and argue that the approach America has adopted to deal with concerns about objectionable online speech and child safety offers a path forward on the privacy front as well.

Yesterday I explained why I’m not too worried about Silicon Valley’s penchant for “solutionism,” which Evgeny Morozov tackles in his new book. Essentially I think that as long as we make decisions about which technologies to adopt via market processes, people will reject those applications that are stupid or bad. Today I want to explore one reason why I’m optimistic that, in the long run, the public will get the technology it wants, despite the perennial squeamishness of some intellectuals.

The problem some thinkers and pundits have with my sanguine let-a-thousand-flowers-bloom approach is that inevitably the public will embrace some technologies that the thinkers don’t like. The result is usually a lot of fretting and hand-wringing by public intellectuals about what the scary new technology will do to our brains or society. Eventually, activists take on the cause and try to use state power to limit the choices the rest of us can make—for our own good, rest assured.

Today it seems that the next technology to get this treatment will be life-logging and personal data mining, as I discussed in my last post. Squarely in the crosshairs right now is Google Glass.

In this CNN op-ed about Glass, Andrew Keen waits only seven words before using the adjective “creepy”—the watchword of nervous nellies everywhere. His concern is that those wearing Google Glass will be spying on anyone in their line of sight. Mark Hurst expresses similar concerns in a widely circulated blog post that also frets about what happens when we’re all not just recording but also being recorded.

This time around, though, I think the worrywarts face an uphill battle. That’s because in the case of life-logging and personal data mining, the “creepy” parts of the technologies are one and the same with the technologies themselves. The “creepiness” is not a bug; it’s the feature, and it can’t be severed without destroying the technology.

Via a Twitter post this morning, privacy lawyer Stephen Kline (@steph3n) brings to my attention this new California bill that “would require the privacy policy [of a commercial Web site or online service] to be no more than 100 words, be written in clear and concise language, be written at no greater than an 8th grade reading level, and to include a statement indicating whether the personally identifiable information may be sold or shared with others, and if so, how and with whom the information may be shared.”

I’ve always been interested in efforts — both on the online safety and digital privacy fronts — to push for “simplified” disclosure policies and empowerment tools. Generally speaking, increased notice and simplified transparency in these and other contexts are good norms that companies should follow. However, as I point out in a forthcoming law review article in the Harvard Journal of Law & Public Policy, we need to ask ourselves whether the highly litigious nature of America’s legal culture will allow for truly “simplified” privacy policies. As I note in the article, by its very nature, “simplification” likely entails less specificity about the legal duties and obligations of either party. Consequently, some companies will rightly fear that a move toward more simplified privacy policies could open them up to greater legal liability. If policymakers persist in the effort to force the simplification of privacy policies, therefore, they may need to extend some sort of safe harbor provision to site operators for a clearly worded privacy policy that is later subject to litigation because of its lack of specificity. If not, site operators will find themselves in a “damned if you do, damned if you don’t” position: Satisfying regulators’ desire for simplicity will open them up to attacks by those eager to exploit the lack of specificity inherent in a simplified privacy policy.

Another issue to consider comes down to simple bureaucratic sloth.

Obama has talked a big game about online privacy. He promised reform during the 2008 campaign. A year ago, the White House proposed a “Privacy Bill of Rights.” But so far, the Administration has delivered little more than fine words. Worse, it has focused on the wrong problems.

Government has an important role to play in protecting consumer privacy, but its snooping and surveillance are far bigger problems—which have only grown worse. While Washington talks of a new commercial privacy “Bill of Rights,” the real Bill of Rights is in peril.

The American Revolution erupted, in large part, out of seething resentment at British privacy intrusions conducted without judicial supervision. Virginia adopted its own Bill of Rights shortly before the Declaration of Independence, including language that later informed Madison’s Fourth Amendment to the Constitution: “the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated.” Law enforcement must generally obtain a warrant before conducting a search—which means convincing a judge that probable cause exists to believe a crime has been committed.

Attendees at the State of the ‘Net conference will be thrilled to know that Larry Downes will be making an encore performance Wednesday afternoon, January 23rd, in the Rayburn House Office Building. The noontime briefing is entitled “A Rational Response to the Privacy ‘Crisis’.” It’s appropriately named because he’ll be discussing ideas from his recent Cato policy analysis: “A Rational Response to the Privacy ‘Crisis’.”

Here’s a thought experiment. Let’s say you believe the Internet economy needs more regulation to guard against potential privacy violations or what you regard as excessive data aggregation. Further, you believe that no amount of self-regulation, social norms, market pressure, education, empowerment, or anything else could possibly substitute for regulation. I know there are a lot of people out there today who feel this way. Regardless of the merits of such claims, here’s my question for you: Do the ends (enhanced privacy protections) justify any means (regulation at any and every level of government)? For example, what would you think about having all 50 states create their own Privacy Offices or Data Protection Bureaus that issued regulations or recommendations about Internet best practices?

What got me thinking about this was this new blog post by Parker Higgins of EFF, “California Attorney General Releases Mobile Privacy Recommendations.” In the essay, Higgins showers praise on California Attorney General Kamala D. Harris, who just released a document (“Privacy on the Go”) that lays out a long set of privacy “best practices” for mobile app developers. Higgins writes:

EFF applauds this important step forward, and congratulates the California Attorney General on a thorough and clearly written explanation of the importance of mobile privacy and how developers can deliver. It’s true that as technology changes, the specific needs and guidelines for companies will need to adapt. We could well see a time when these principles do not adequately protect the rights and needs of consumers. However, right now these principles represent a huge step forward — going beyond existing law in a way that improves transparency, accountability, and choice for users of mobile devices.

Regardless of the merits of the principles and recommendations contained in that report — and I agree that many of them are quite sensible best practices that industry should be following — I can’t help but wonder whether it is wise for EFF to be cheering on state-based Internet meddling so openly.

We don’t expect news reports to exhibit the tightest legal reasoning, of course, but Sunday’s New York Times story on location privacy made a runny omelet of some important legal issues relating to privacy.

The starting point is United States v. Jones, a case the Supreme Court decided last January. The Court held that government agents violated the Fourth Amendment when they attached a GPS tracking device to a vehicle without a warrant and used it to determine the location of a suspect for four weeks. Location information can be revealing.

“Some advocacy groups view location tracking by mobile apps and ad networks as a parallel, warrantless commercial intrusion,” says the story. A location privacy bill forthcoming from Senator Al Franken (D-MN) “suggests that consumers may eventually gain some rights over their own digital footprints.”

Jones was about government agents—their freedom of action specifically disabled by the Fourth Amendment—invading a recognized property right (in one’s car) to gather data. There is little analogy to location tracking by mobile devices, apps, and networks, which are privately provided, voluntarily adopted, and which violate no recognized right. Indeed, their tracking provides various consumer benefits. The Times piece equivocates between the government’s failure to get a legally required search warrant in Jones and uses of data that some may feel “unwarranted,” in the sense of being “uncalled for under the circumstances.”

The first line of Larry Downes’ new Cato Policy Analysis, “A Rational Response to the Privacy ‘Crisis’,” could have been written for the Times‘ sloppy analogy:

“What passes today as a ‘debate’ over privacy lacks agreed-upon terms of reference, rational arguments, or concrete goals,” Downes says. The paper examines how the “creepy factor,” rather than crisp thinking and clear-headed examination, permeates privacy debates.

It’s not that location tracking doesn’t generate legitimate privacy concerns. It does. People don’t know how location information is collected and used. They don’t always know how to stop its collection. And the future consequence of location information collected today is unclear. But the capacity of private actors to harm individuals with location data is limited. Their incentive to do so is even smaller. And avoiding location tracking is simply done, albeit at significant cost to convenience.

As Downes’ piece illustrates, we’ve seen this kind of debate before, and we’ll see it again: A particular innovation spurs privacy concerns and a backlash (whipped up by legislators and regulators). A negotiation between consumers and industry, facilitated by the news media, advocates, and a variety of other actors, produces the way forward. As often as not, the way forward is a partial or complete embrace of the technology and its benefits. Plenty of times, the threat never materializes (see pervasive RFID).

Downes explores the legal explanation for what happens when consumers adopt new technologies that use personal information to produce custom content and services—this question of “rights over … digital footprints.” He finds that licensing is the best explanation for what is happening. When consumers use the many online services available to them, they license data that they might otherwise control.

The legal framework Downes puts forward sets the stage for iterative, contract-based development of rules for how data may be used in the information economy. It cuts against top-down dictates like Franken’s proposal to regulate future technologies today, knowing so little of how technology or society will develop.

Ultimately, no legislature can resolve the deep and conflicted cultural issues playing out in the privacy debate. Downes characterizes that debate as revealing a tension between Americans’ Davy Crockett side—the privacy-protective frontiersman—and our collective Puritanism. We are participants in and parts of a very watchful society.

It’s worth a read, Larry Downes’s “A Rational Response to the Privacy ‘Crisis’.”