Privacy, Security & Government Surveillance

Internet policy Shame Artist extraordinaire Chris Soghoian has struck again! Chris recently shamed the online advertising industry into improving its privacy practices with his Targeted Advertising Cookie Opt-Out (TACO) plug-in for Firefox. Now Chris has set his sights on the security practices of cloud service providers.

A letter released this morning, signed by 37 leading online security experts (and organized by Chris), calls on Google to offer persistent SSL (HTTPS) encryption by default for all Google services, or at the very least to make more visible the option currently given to users to opt in to SSL for all communications. Google, in its response, indicated that it was already “looking into whether it would make sense to turn on HTTPS as the default for all Gmail users.”

While Google’s response identifies some clear problems with implementing persistent SSL for all users (esp. connection speed), few would deny that it makes sense for webmail providers to encrypt all traffic using SSL, rather than sending email data “in the clear,” which risks interception by hackers. We at PFF hold no brief for Google; in fact, we have found ourselves disagreeing with the company on many other occasions on a range of issues (most notably net neutrality mandates). Nonetheless, on this front, Google has long been a leader, having offered SSL since Gmail launched and having begun providing the persistent HTTPS option last summer, while most of its competitors still use SSL only for the initial authentication that occurs when a user first signs in. While the letter focuses on Google and webmail in particular, this issue has far broader implications for all online cloud service providers.
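What “persistent SSL” amounts to, in practice, is making sure every request travels over HTTPS, not just the sign-in. A minimal sketch of the kind of URL upgrade a client or proxy might apply (the Gmail URL here is purely illustrative):

```python
from urllib.parse import urlparse, urlunparse

def force_https(url: str) -> str:
    """Rewrite a plain-HTTP URL to HTTPS, leaving other schemes untouched.

    This mirrors the 'persistent SSL' behavior the letter asks for:
    every request, not only the initial authentication, is encrypted.
    """
    parts = urlparse(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunparse(parts)

print(force_https("http://mail.google.com/mail/"))   # -> https://mail.google.com/mail/
print(force_https("https://mail.google.com/mail/"))  # already secure; unchanged
```

The real engineering cost Google cites lives elsewhere (TLS handshakes and server load), but the user-facing change is exactly this simple: default the scheme to HTTPS.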

No Free Lunch: The Costs of Encryption Gmail, Yahoo! Mail, Hotmail, etc. are, of course, “free” (i.e., ad-supported). Google in particular has led the way in increasing the functionality offered in Gmail, not just constantly increasing the total storage space provided to every user (now over 7GB), but regularly adding innovative new features—at no charge to users. Continue reading →

The Department of Homeland Security’s Privacy Office sez:

On his first full day in office, President Obama directed his administration to seek an “unprecedented level of openness in government.” In the spirit of openness and transparency consistent with the directives of the administration and with her personal philosophy, the Chief Privacy Officer of the Department of Homeland Security would like to engage in quarterly updates on privacy activities in the Department for the privacy advocacy community. The inaugural Privacy Information for Advocates (PIA) will be held in person on Friday, June 19, 2009, in the DHS Privacy Office located at 1621 N. Kent Street, Suite 900 in Rosslyn, VA. The update will begin at 3:00 pm. If you plan to attend, please RSVP to Lynn Parker at Lynn[dot]Parker1[at]dhs[dot]gov before noon on Wednesday, June 17. RSVPs are required in order to confirm participation.

I have a quibble with the acronym – heh heh – “PIA” is also the acronym for “Privacy Impact Assessment.” But if you carefully use context to discern meaning, you’ll probably figure out when people are talking about the meeting versus when they are talking about the document.

But people who are not “in the know” won’t understand the difference, and in their eyes your power and authority will rise until you reach the status of privacy demi-god.

Oh, whatever. Just go to the meeting.

At CFP Today

by on June 2, 2009 · 7 comments

I’ll be speaking on a panel titled “The Future of Security vs. Privacy” today at the Computers Freedom and Privacy conference. If you’re in Washington, D.C., come on by the Marvin Center at George Washington University and head up to the third floor. The conference continues through the week.

The organizers say C-SPAN will be recording parts of today, and it is supposed to be streamed live here. You Twitterers can follow the conversation by checking out the official hashtag: #cfp09. Be sure to say your piece, as well.

Vision of the Anointed book coverBerin recently encouraged me to re-read Thomas Sowell’s The Vision of the Anointed: Self-Congratulation as a Basis for Social Policy, which I hadn’t looked at since I first read it back in 1995 or ’96. I’m glad I did, since Sowell’s work has always been profoundly influential on my thinking (especially his masterpiece, A Conflict of Visions) and I had forgotten how useful The Vision of the Anointed was in helping me understand the recurring model that drives ideological crusades to expand government power over our lives and economy.

“The great ideological crusades of the twentieth-century intellectuals have ranged across the most disparate fields,” Sowell noted in the book.  But what they all had in common, he argued, was “their moral exaltation of the anointed above others, who are to have their different views nullified and superseded by the views of the anointed, imposed via the power of government.” (p. 5)  These elitist, government-expanding crusades shared several key elements, which Sowell identified as follows:

  1. Assertion of a great danger to the whole society, a danger to which the masses of people are oblivious.
  2. An urgent need for government action to avert impending catastrophe.
  3. A need for government to drastically curtail the dangerous behavior of the many, in response to the prescient conclusions of the few.
  4. A disdainful dismissal of arguments to the contrary as either uninformed, irresponsible, or motivated by unworthy purposes.

You can see this model at work on a daily basis today with our government’s various efforts to reshape our economy, but I think this model is equally applicable to debates over social policy and speech control.  In particular, the various “technopanics” I have been writing about recently fit this model. (See 1, 2, 3, 4, 5).  For example, consider how this plays out in the debate over online social networking:

Continue reading →

Recall a couple of years ago when I lauded Google – and also picked on them – for making customer data “more anonymous”?

“‘Anonymous’ is correctly regarded as an absolute condition,” I wrote. “Like pregnancy, anonymity is either there or it’s not. Modifying the word with a relative adjective like ‘more’ is a curious use of language.”

The challenge of these concepts – “anonymized” or “de-identified” data – is still around, and it’s still a difficult one.

Here’s a sophisticated take on the question:

Information is increasingly difficult to classify as “identified” or “de-identified,” particularly as it is copied, exchanged, or recombined with other information. With rapidly evolving technologies and databases, it is more appropriate to describe a spectrum of “identifiability,” rather than a binary classification of information as identifiable or not. The question could then become not whether deidentified information might be made re-identifiable, but rather which entities would be able to re-identify the information, how much effort they would have to expend, and what limits are placed on their doing so.

And here’s an advocacy group apparently lacking that sophistication. They treat information as flatly “de-identified” in a legal filing about a New Hampshire law that bans the sale of prescription drug data for marketing purposes:

[T]he Prescription Information Law does not implicate patient privacy. While it purports to protect privacy interests, the statute regulates patient de-identified information.

Here’s the thing: Both quotes were issued by the Center for Democracy and Technology. Continue reading →
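The “spectrum of identifiability” point can be made concrete. One common yardstick is k-anonymity: how many records share the same combination of quasi-identifiers (ZIP code, age band, and so on). A k of 1 means at least one record is unique, and thus a candidate for re-identification. A toy sketch, with entirely invented data:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity of a dataset: the size of the smallest
    group of records sharing the same quasi-identifier values.
    k == 1 means at least one record is uniquely identifiable."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Invented sample: "de-identified" prescription records that still
# carry quasi-identifiers an adversary could link to outside data.
records = [
    {"zip": "03301", "age_band": "30-39", "drug": "A"},
    {"zip": "03301", "age_band": "30-39", "drug": "B"},
    {"zip": "03301", "age_band": "40-49", "drug": "A"},  # unique combination
]
print(k_anonymity(records, ["zip", "age_band"]))  # -> 1
```

Nothing in this dataset carries a name, yet the third record stands alone; whether it is “de-identified” depends on what other data the viewer holds, which is exactly the spectrum CDT’s first quote describes.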

I’m reading a couple of interesting books right now [see my Shelfari list here] including Guarding Life’s Dark Secrets: Legal and Social Controls over Reputation, Propriety, and Privacy by Lawrence Friedman of Stanford Law School. The book examines the legal and social norms governing privacy, reputation, sex, and morals over the past two centuries. It’s worth putting on your reading list. [Here’s a detailed review by Neil Richards.] I might pen a full review later, but for now I thought I would just snip this passage from the concluding chapter:

In an important sense, privacy is a modern invention. Medieval people had no concept of privacy.  They also had no actual privacy. Nobody was ever alone. No ordinary person had private space.  Houses were tiny and crowded.  Everyone was embedded in a face-to-face community. Privacy, as idea and reality, is the creation of a modern bourgeois society.  Above all, it is a creation of the nineteenth century.  In the twentieth century it became even more of a reality. [p. 258]

In a time when amorphous “rights” to privacy seem to be multiplying like wildflowers, this is an important insight from Friedman. In my opinion, many of the creative privacy theories being concocted today are based on false nostalgia about some forgotten time in the past when we supposedly all had our own little quiet spaces that were completely free from privacy intrusions. But as Friedman makes clear, this is largely a myth. That’s not to say that there aren’t legitimate issues out there today. But it’s important that we place modern privacy issues in a larger historical context and understand how many of today’s concerns pale in comparison to the problems of the past.

[Note: If you’re interested in this topic, you’ll also want to read Daniel Solove’s The Future of Reputation: Gossip, Rumor, and Privacy on the Internet.  Also, here’s Jim Harper’s review of it.]

Lee Gomes writes on Forbes.com with a clear-eyed reminder that privacy regulation has been costly yet has failed to deliver. Lovers of government intervention will, of course, take this as an argument to double down.

The Computers Freedom & Privacy conference is consistently one of the most interesting and forward-looking privacy conferences. This year, it’s at George Washington University in Washington, D.C. June 1-4.

I helped organize it this time, though by no means does the event skew libertarian. What it does is bring together people of all ideologies to discuss common concerns about the present and future state of privacy.

I’ll be speaking on a panel called “The Future of Security vs. Privacy” on Tuesday, June 2nd. Here’s the program page. And here’s the registration page if any of this whets your appetite.

NebuAd is Dead

by on May 19, 2009 · 14 comments

NebuAd is dead. The company's plan to track users through their ISPs for the purpose of targeting advertising met with public and congressional concern that ultimately led to its demise.

I believe that ISPs should stick to serving bits and not get into the business of serving or helping to serve ads, so I’m glad to see NebuAd’s model fail. I’ve been made aware by a similar company – Phorm – of the privacy sensitivity they design into their system, but the answer for me is still “No, thanks.”

In terms of policy, this story is mixed. Fans of government involvement probably believe that concerns expressed by public authorities caused NebuAd’s partners to pull out. ISPs also responded to public concerns expressed directly and in the media, of course, and I believe consumers err when they rely passively on government authorities for protection.

Facebook has been at the center of a controversy involving its moderation policies and The Pirate Bay, a popular BitTorrent tracker that was found guilty of copyright infringement by a Swedish court last month. Since early April, Facebook has enforced a “site-wide” ban on links to The Pirate Bay – including those in private messages.

This practice may run afoul of federal wiretapping statutes that bar service providers from “intercepting” private messages, according to an article that appeared on Wired Threat Level last week. Wired quotes Kevin Bankston, a senior attorney for the Electronic Frontier Foundation, who explains that Facebook’s filtering raises “serious questions about whether Facebook is in compliance with federal wiretapping law.”

It’s important to draw a distinction between the traditional notion of “wiretapping” and Facebook’s “interception” of user messages, which doesn’t involve any human intervention. Regardless of how the courts may interpret ancient laws like the 1986 Electronic Communications Privacy Act, an automated computer system flagging and deleting certain strings from user messages simply isn’t comparable to a third party secretly listening in on a private phone conversation.

Besides, Facebook makes clear to its users from the get-go that their messages and postings are subject to a set of rules (which Facebook lays out in plain English). If Facebook believes a message or posting is against the rules, it can block or remove it. This is not an unreasonable rule; many online discussion forums have enforced similar policies since the Web’s early days. Such filtering is possible only if sites can “examine” messages to identify misconduct.
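Filtering of this kind is a pattern match against a list of banned links; no person ever reads the message. A hypothetical sketch of such an automated check (the blocked-domain list and function names are invented for illustration, not Facebook’s actual system):

```python
import re

# Illustrative only; Facebook's real blocklist is not public.
BLOCKED_DOMAINS = {"thepiratebay.org"}

URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def allowed(message: str) -> bool:
    """Return False if the message links to a blocked domain.

    The 'interception' here is a string match run by software,
    which is the distinction the post draws against wiretapping.
    """
    for host in URL_RE.findall(message):
        host = host.lower().split(":")[0]  # drop any port number
        if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
            return False
    return True

print(allowed("check out http://thepiratebay.org/torrent/123"))  # False
print(allowed("lunch at noon?"))                                 # True
```

Whether courts treat this as an “interception” under ECPA is the open legal question; as a technical matter, it is the same mechanism spam filters and forum word-filters have used for years.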

Continue reading →