Privacy, Security & Government Surveillance

I was reading this Sun Magazine interview with the always-interesting Nick Carr and I liked what he had to say here about the public’s inconsistent views on privacy:

If you ask people whether they’re concerned about the ability of the government or corporations to gather information about them online, they’ll say yes. But if you look at how they behave online, they don’t display much fear of exposing themselves. What that says about people — and it’s true for most of us — is that we will readily forgo our privacy in exchange for convenient and useful services, particularly if they’re free. That’s a trade-off you make all the time on the Internet. Even if people were more conscious of how this information might be exploited, I doubt most would change their behavior.

This reminds me of the classic “hamburgers for DNA” quip from security expert Bruce Schneier, who once famously noted:

If McDonalds in the United States would give away a free hamburger for a DNA sample, they would be handing out free lunches around the clock. So people care about their privacy, but they don’t care to pay for it. In the United States we have frequent shopper cards, which will track people’s purchases for a 5-cent discount on a can of tuna fish. I don’t think you can convince the public to care about it.

Continue reading →

Over at Computerworld, Ben Rothke makes the case for “Why Information Must Be Destroyed.”  “Given the vast amount of paper and digital media that amasses over time,” he argues, “effective information destruction policies and practices are now a necessary part of doing business and will likely save organizations time, effort and heartache, legal costs as well as embarrassment and more.”  He continues:

Every organization has data that needs to be destroyed. Besides taxes, what unites every business is that they possess highly sensitive information that should not be seen by unauthorized persons.  While some documents can be destroyed minutes after printing, regulations may require others to be archived from a few years to permanently.  But between these two ends of the scale, your organization can potentially have a large volume of hard copy data occupying space as a liability, both from a legal and information security perspective.  Depending on how long you’ve been in business, the number of physical sites and the number of people you employ, it’s possible to have hundreds of thousands, if not millions, of pages of hard copy stored throughout your company — much of which is confidential data that can be destroyed.

He’s no doubt correct that it makes good business sense to routinely purge data — both physical and digital — to guard against theft, misplacement, leaks, abuse, or whatever else.  Of course, in the context of digital information, there are many folks who would like to see digital records purged more frequently to avoid growing concerns about online privacy.  I think most of those concerns are overstated, but it can’t hurt to destroy most collected information after a certain period to play it safe and keep customers happy.
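Mechanically, the sort of retention window Rothke describes can be automated. The sketch below is purely illustrative — the two-year window and the `purge_expired` helper are hypothetical, not anyone's actual policy — and simply deletes files whose last-modified time falls outside the retention period:

```python
import time
from pathlib import Path

RETENTION_DAYS = 365 * 2  # hypothetical two-year retention window


def purge_expired(root: str, retention_days: int = RETENTION_DAYS) -> list:
    """Delete files under `root` last modified before the retention
    cutoff, and return the paths that were removed."""
    cutoff = time.time() - retention_days * 86400
    removed = []
    for path in Path(root).rglob("*"):
        # Only plain files are considered; directories are left alone.
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(str(path))
    return removed
```

A real destruction policy would, of course, carve out records that regulation requires be archived, log what was destroyed, and use secure deletion rather than a simple unlink.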

Problem is, as we discussed here last week, if some lawmakers in Washington get their way, it might be illegal to do that!  Quite obviously, data retention mandates are at odds with data destruction efforts.  [Mitch Wagner has more coverage of the data retention debate over at Information Week and he quotes my PFF colleague Sid Rosenzweig.]

And so begins another fight over data retention. As Declan summarizes:

Republican politicians on Thursday called for a sweeping new federal law that would require all Internet providers and operators of millions of Wi-Fi access points, even hotels, local coffee shops, and home users, to keep records about users for two years to aid police investigations. The legislation, which echoes a measure proposed by one of their Democratic colleagues three years ago, would impose unprecedented data retention requirements on a broad swath of Internet access providers and is certain to draw fire from businesses and privacy advocates. […] Two bills have been introduced so far — S.436 in the Senate and H.R.1076 in the House. Each of the companion bills is titled “Internet Stopping Adults Facilitating the Exploitation of Today’s Youth Act,” or Internet Safety Act.

Julian also has coverage over at Ars and quotes CDT’s Greg Nojeim who says the data retention language is “invasive, risky, unnecessary, and likely to be ineffective.”  I think that’s generally correct.  Moreover, I find it ironic that at a time when so many in Congress seemingly want online providers to collect and retain LESS data about users, this bill proposes that ISPs be required to collect and retain MORE data. One wonders how those two legislative priorities will be reconciled!

Don’t get me wrong. It’s good that Congress is taking steps to address the scourge of child pornography — especially with stiffer sentences for offenders and greater resources for law enforcement officials. Extensive data retention mandates, however, would be unlikely to help much given the ease with which bad guys will likely circumvent those requirements using alternative access points or proxies.  Finally, retention mandates pose a threat to the privacy of average law-abiding citizens and impose expensive burdens on online intermediaries.

We’ve had more to say about data retention here at the TLF over the years.  Here are a few things to read: Continue reading →

On this episode of “Tech Policy Weekly,” Technology Liberation Front contributors Ryan Radia and Berin Szoka join me for a discussion of the flare-up over Facebook’s recent changes to the data retention provisions of its Terms of Use agreement and whether there are any serious privacy issues in play here—or if this is all much ado about nothing. [Ryan blogged about it here, and I did here.]

Earlier this month, Facebook announced changes to the way it handled and retained user data after a user quits the site, raising questions about who actually owns that data and whether any privacy issues were raised by the company’s new policy. Following some intense scrutiny in the blogosphere, Facebook decided this week to revert to its old terms of service until it figures out a new approach to data management and ownership.

You can begin listening by downloading the MP3 file here or by just clicking the play button below.  Or subscribe to our Podcast (iTunes, other).

[display_podcast]

Much like the Beacon incident before it, this latest kerfuffle over Facebook’s changes to its privacy policy leaves me with mixed feelings.

On one hand, I just don’t see what the big deal is. People act like Facebook is taking away all their “rights” or possessions, which is just silly. They were just clarifying how information would be used. In one sense, I feel like saying ‘Chill out. And if you don’t like Facebook’s policies, go use some other social networking site for God’s sake!’

On the other hand, I appreciate the fact that some people are far more sensitive about these things and are seeking to collectively pressure Facebook to change its approach to information use and ownership, and I’m fine with that. In fact, like the Beacon hullabaloo, it’s an example of what Berin Szoka and I have argued is the power of voluntary persuasion and social pressure to remedy privacy concerns before we call on government to adopt coercive, top-down, ham-handed, one-size-fits-all regulatory solutions. As we noted in our recent paper about the looming threat of online advertising regulation:

there are many indirect pressures and reputational incentives that provide an important check on the behavior of firms and the privacy policies they craft.  Just as the Internet increases the ways advertisers can reach audiences, it increases the power audiences have to influence advertisers.  For example, when Facebook introduced its Beacon program in 2007, which shared users’ online purchases with their friends without sufficient warning about how the program worked and the ability to opt-out of the program, the response was swift and effective:  Users “collectively raised their voices” and “the privacy pendulum [swung] back into equilibrium” [according to the Interactive Advertising Bureau.]  Within two weeks of the Beacon program being first deployed, Facebook had created an opt-out procedure.

Continue reading →

David Margolick has penned a lengthy piece for Portfolio.com about the AutoAdmit case, which has important ramifications for the future of Section 230 and online speech in general. Very brief background: AutoAdmit is a discussion board for students looking to enter, or just discuss, law schools. Some threads on the site have included ugly — insanely ugly — insults about some women.  A couple of those women sued to reveal the identities of their attackers and hold them liable for supposedly wronging them.  The case has been slowly moving through the courts ever since. Again, read Margolick’s article for all the details.  The important point here is that the women could not sue AutoAdmit directly for defamation or harassment because Section 230 of the Communications Decency Act of 1996 immunizes websites from liability for the actions of their users.  Consequently, those looking to sue must go after the actual individuals behind the comments which (supposedly) caused the harm in question.

I am a big defender of Section 230 and have argued that it has been the cornerstone of Internet freedom. Keeping online intermediaries free from burdensome policing requirements and liability threats has created the vibrant marketplace of expression and commerce that we enjoy today. If not for Sec. 230, we would likely live in a very different world.

Sec. 230 has come under attack, however, from those who believe online intermediaries should “do more” to address various concerns, including cyber-bullying, defamation, or other problems.  For those of us who believe passionately in the importance of Sec. 230, the better approach is to preserve immunity for intermediaries and instead encourage more voluntary policing and self-regulation by intermediaries, increased public pressure on those sites that turn a blind eye to such behavior to encourage them to change their ways, more efforts to establish “community policing” by users such that they can report or counter abusive language, and so on.

Of course, those efforts will never be foolproof, and a handful of bad apples will still be able to cause a lot of grief for some users on certain discussion boards, blogs, and so on.  In those extreme cases where legal action is necessary, it would be optimal if every effort were exhausted to go after the actual end-user who is causing the problem before tossing Sec. 230 and current online immunity norms to the wind in an effort to force the intermediaries to police speech.  After all, how do the intermediaries know what is defamatory?  Why should they be forced to sit in judgment of such things?  If, under threat of lawsuit, they are petitioned by countless users to remove content or comments that those individuals find objectionable, the result will be a massive chilling effect on online free speech, since those intermediaries would likely play it safe most of the time and just take everything down. Continue reading →

The Supreme Court building (thank Chief Justice Taft!)

During my summer internship at CEI, a couple of us interns discussed the book Cato’s Robert Levy published last May, The Dirty Dozen: How Twelve Supreme Court Cases Radically Expanded Government and Eroded Freedom. We looked at Levy’s list of the worst decisions and sent each other lists of our own. Now that I’m taking ConLaw, I feel as though the time has come to post my lists of the twelve worst and the twelve best Supreme Court decisions of all time. It is by no means an exhaustive list. My inclusion of different cases than Levy does not indicate that I disagree with his assessment that those decisions are terrible – just maybe not as bad as the ones I select.

The Worst:

  1. The Slaughter-House Cases (1873). The very worst decision ever made by the US Supreme Court. Eviscerated the 14th Amendment only five years after its adoption. It is best known for reading the Privileges or Immunities Clause, which was supposed to be (and could have been) a vehicle for both incorporation and unenumerated rights, out of the Constitution. But it also wrote out the Due Process Clause and the Equal Protection Clause, though those two clauses eventually crawled back into existence, to a degree.
  2. Katzenbach v. McClung (1964). Of the various cases reading the Commerce Clause expansively enough to permit Congress to pass any law it desires, thus destroying the basis of the federal government as one of defined and limited powers, it was tough to decide which to include. But McClung seems to be the most expansive in both its result and its holding.

What would it take to create a more secure Internet?  That’s what John Markoff explores in his latest New York Times article, “Do We Need a New Internet?”  Echoing some of the same fears Jonathan Zittrain articulates in his new book The Future of the Internet, Markoff wonders if online viruses and other forms of malware have gotten so out-of-control that extreme measures may be necessary to save the Net.  Compared to when cyber-security attacks first started growing over 20 years ago, Markoff argues that:

[T]hings have gotten much, much worse. Bad enough that there is a growing belief among engineers and security experts that Internet security and privacy have become so maddeningly elusive that the only way to fix the problem is to start over.

Like many others, Markoff fingers anonymity as one potential culprit:

The Internet’s current design virtually guarantees anonymity to its users. (As a New Yorker cartoon noted some years ago, “On the Internet, nobody knows that you’re a dog.”) But that anonymity is now the most vexing challenge for law enforcement. An Internet attacker can route a connection through many countries to hide his location, which may be from an account in an Internet cafe purchased with a stolen credit card. “As soon as you start dealing with the public Internet, the whole notion of trust becomes a quagmire,” said Stefan Savage, an expert on computer security at the University of California, San Diego.

Consequently, Markoff suggests that:

A more secure network is one that would almost certainly offer less anonymity and privacy. That is likely to be the great tradeoff for the designers of the next Internet. One idea, for example, would be to require the equivalent of drivers’ licenses to permit someone to connect to a public computer network. But that runs against the deeply held libertarian ethos of the Internet.

Indeed, not only does it run counter to the ethos of the Net, but as Markoff rightly notes, “Proving identity is likely to remain remarkably difficult in a world where it is trivial to take over someone’s computer from half a world away and operate it as your own. As long as that remains true, building a completely trustable system will remain virtually impossible.”  I’ve spent a lot of time writing about that fact here and won’t belabor the point other than to say that efforts to eliminate anonymity for the entire Internet would prove extraordinarily intrusive and destructive — of both the Internet’s current architecture and the rights of its users.  There’s just something about a “show-us-your-papers,” national ID card-esque system of online identification that creeps most of us out. That’s why I spend so much time fighting age verification mandates for social networking sites and other websites; it’s the first step down a very dangerous road.

But what if we could apply such solutions in a narrower sense?  That is, could we create more secure communities within the overarching Internet superstructure that might provide greater security?  Markoff starts thinking along those lines when he suggests… Continue reading →

Statue at FTC Headquarters: “Man Controlling Trade” (We’re rooting for the horse!)

Adam Thierer and I have just released a new PFF paper entitled “Targeted Online Advertising: What’s the Harm & Where Are We Heading?” (PDF) about the FTC’s new “Self-Regulatory Principles for Online Behavioral Advertising.”  Adam lampooned some of the attitudes at play in this debate in a great rant yesterday.

But we give the FTC credit for resisting calls to abandon self-regulation, and for its thoughtful consideration of the danger of stifling advertising, the economic engine that has supported a flowering of creative expression and innovation in online content and services.  That said, we continue to have our doubts about the FTC’s approach, however well-intentioned:

  1. Where is this approach heading?  Will a good faith effort to suggest best practices eventually morph into outright government regulation of the online advertising marketplace?
  2. What, concretely, is the harm we’re trying to address?  We have asked this question several times before and have yet to see a compelling answer.
  3. What will creeping “co-regulation” mean for the future of “free” Internet services?  Is the mother’s milk of the Internet, advertising, about to be choked off by onerous privacy mandates?

We stand at an important crossroads in the debate over the online marketplace and the future of a “free and open” Internet. Many of those who celebrate that goal focus on concepts like “net neutrality” at the distribution layer, but what really keeps the Internet so “free and open” is the economic engine of online advertising at the applications and content layers. If misguided government regulation chokes off the Internet’s growth or evolution, we would be killing the goose that laid the golden eggs.

The dangers of regulation to the health of the Internet are real, but the ease with which government could disrupt the economic motor of the Internet (advertising) is not widely understood, and therein lies the true danger in this debate.  The advocates of regulation pay lip service to the importance of advertising in funding online content and services but don’t seem to understand that this quid pro quo is a fragile one: Tipping the balance, even slightly, could have major consequences for continued online creativity and innovation.

Continue reading →

So, the Federal Trade Commission (FTC) released its revised “Self-Regulatory Principles for Online Behavioral Advertising” today and it’s bound to generate a lot of commentary from those privacy advocates who seem to believe that we can never go far enough in regulating the flow of information online or limiting commercial marketing.  Berin Szoka and I will have a PFF paper out shortly [update: here it is] discussing the report in more detail, but for now I just wanted to mention one thing that peeves me about this report and the debate about online advertising in general.

The thing I find so intriguing about reports like this is the way that they implicitly assume that consumers are utterly helpless sheep who completely fail to understand how to protect their own privacy, to the extent those consumers are even sensitive about it at all. Specifically, there’s always this argument about how consumers don’t have “adequate notice” or “meaningful choice” when it comes to website privacy policies or how their information might be collected or used to serve up better ads.

Frankly, I think these concerns have been completely blown out of proportion by privacy zealots who would make just about any use of information, or effort to use it to target ads, a federal crime.  Worse yet, there’s a ‘something-for-nothing’ element to these debates that always irks me.  Some of these regulatory advocates seem to be under the impression that all these free Internet services and innovations fall to us like manna from heaven and that the good times will just keep on rollin’ right along even as they advocate regulations that would completely undercut the Internet’s primary economic engine: targeted advertising.

Regardless, here’s my little contribution to the movement toward simpler privacy policies to make sure web users understand what they are getting into and why they have to give a little to get a little. I want every Internet company to adopt the following privacy policy: Continue reading →