Defining “Paternalism” Online

February 12, 2010

Since some of my cobloggers have taken to using the phrase “Privacy Paternalists” to describe some advocates of privacy regulation, I want to suggest a distinction growing out of the discussion on Berin’s Google Buzz post below.

I think that it’s clear there is such a thing as a “privacy paternalist”—and there are not a few among folks I consider allies on other issues.  They’re the ones who are convinced that anyone who values privacy less highly than they do must be confused or irrational. A genuine privacy paternalist will say that even if almost everyone understands that Amazon keeps track of their purchases to make useful recommendations, this collection must be prohibited because they’re not thinking clearly about what this really means and may someday regret it.

There’s actually a variant on this view that I won’t go into at length, but which I don’t think should be classed as strictly paternalist.  Call this the “Prisoner’s Dilemma” view of privacy.  On this account, there are systemic consequences to information sharing, such that we each get some benefit from participating in certain systems of disclosure, but would all be better off if nobody did.  The merits of that kind of argument probably need to be taken up case-by-case, but whatever else might be wrong with it, the form of the argument is not really paternalistic, since the claim is that (most) individuals have a system-level preference that runs contrary to their preference for disclosure within the existing system.

The objections to Buzz, however, don’t really look like this. The claim is not that people’s foolish choices to disclose should be overridden for their own protection. The claim, rather, is that the system is designed in a way that makes it too easy to disclose information without choosing to do so in any meaningful way. Now, if I can log into your private database as user “J’ OR T=T”, you probably need to learn to sanitize your SQL inputs.  But it is not terribly persuasive for me to argue that criticism of my breach is “paternalistic,” since after all you made your database accessible online to anyone who entered that login. It is substantially more persuasive if I have logged in as “guest” because you had enabled anonymous logins in the hope that only your friends would use them. On the Internet, the difference between protecting information from a user’s own (perhaps ill-advised) disclosure and protecting it from exploitation by an attacker ultimately, in practice, comes down to expectations. (The same is true in the physical world, though settled expectations make this less salient: Preventing me from getting into the boxing ring is paternalism; declaring the park a “boxing ring” by means of a Post-It note at the entrance is a pretext for assault.)
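For readers unfamiliar with the “J’ OR T=T” trick above, here is a minimal sketch of how that kind of injection works, using a hypothetical in-memory users table (the table, column names, and the quoted form of the payload are illustrative assumptions, not anything from Buzz or a real site):

```python
import sqlite3

# Hypothetical users table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_unsafe(name):
    # Vulnerable: user input is pasted directly into the SQL string, so a
    # "name" like "J' OR 'T'='T" turns the WHERE clause into a tautology
    # and the query matches every row.
    query = "SELECT * FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def login_safe(name):
    # Parameterized query: the driver treats the input strictly as data,
    # so the same payload matches nothing.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

print(login_unsafe("J' OR 'T'='T"))  # returns every row, though no user "J" exists
print(login_safe("J' OR 'T'='T"))    # returns no rows
```

The point of the analogy in the text is that nobody would call it “paternalistic” to criticize the unsafe version just because the attacker technically “entered a login.”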

What expectations are reasonable ultimately has to be an empirical question.  If we want to establish whether a particular protocol for information sharing is meaningfully consensual, it is not especially helpful to set the bar by appeal to some a priori preference for thinking of people as “smart” or “stupid.”  We should actually try to find out: “When people click this button in this context, do they understand what they are agreeing to? Is the clarity of the notice commensurate with the potential consequences?”  If it turns out that many actual users are dismayed and angry about what they have supposedly “agreed” to, it ought to throw into serious doubt the premise that they have agreed at all. And especially when it is the users themselves complaining, paternalism seems like an odd label to apply. The very limited empirical data we have suggests that people generally do not have a very clear understanding of how information about them is being used or may be used. In the case of Buzz, I’m not entirely sure about what is shared with whom under what conditions given different settings—and I study privacy and tech for a living.

One might say that this is an unflattering or obnoxious observation to make because it implies that we’re all stupid and irresponsible, at least for some values of “stupid.”  Unfortunately, it does not therefore become less true. If people are genuinely concerned and confused, they do not become less so if you suggest that only stupid people would be concerned and confused. If you insist that they stop being concerned and confused, because their concern and confusion logically support the position of regulators, they will thank you for directing them toward a group of people who are operating with a more accurate model of the world and start writing checks to EPIC.

With all due respect to the Queen of Hearts, I think the right approach here is verdict first, sentence afterward. First, let’s try to learn what people expect and believe about online privacy practices, what assumptions about time and cognitive capacity are reasonable, and so on. Maybe we need more time in this new space—the Internet has been around for a while, but interconnected social networking sites as a mass phenomenon are still relatively novel—so that users and sites can negotiate the right set of expectations, but it’d still be useful to have a way of tracking whether and how quickly this is actually happening.

Only after you’ve got this factual foundation is it even possible to define “paternalism” adequately in this context. Whether a rule is “paternalistic” can’t really be determined by looking at the rule itself, or even at the rule in combination with the beliefs and expectations a fully informed and perfectly rational being without time constraints might form. It depends what the facts about real people’s beliefs and expectations are. A rule based on too pessimistic a picture will be paternalistic in effect; one based on too sanguine a picture will fail to protect people from being abused.  An adequately protective rule, of course, need not be enforced by government. Privacy advocates and ordinary users can speak up and pressure firms to adopt better practices if they don’t want to lose market share. When the practices and expectations really are out of sync, this will work, and users will appreciate it. But they’re probably going to notice if it’s always the advocates of regulation who are drawing attention to genuine areas of concern, while libertarians predictably insist there are no infidels in Baghdad.
