Defining “Paternalism” Online

by Julian · February 12, 2010

Since some of my cobloggers have taken to using the phrase “Privacy Paternalists” to describe some advocates of privacy regulation, I want to suggest a distinction growing out of the discussion on Berin’s Google Buzz post below.

I think that it’s clear there is such a thing as a “privacy paternalist”—and there are not a few among folks I consider allies on other issues.  They’re the ones who are convinced that anyone who values privacy less highly than they do must be confused or irrational. A genuine privacy paternalist will say that even if almost everyone understands that Amazon keeps track of their purchases to make useful recommendations, this collection must be prohibited because they’re not thinking clearly about what this really means and may someday regret it.

There’s actually a variant on this view that I won’t go into at length, but which I don’t think should be classed as strictly paternalist.  Call this the “Prisoner’s Dilemma” view of privacy.  On this account, there are systemic consequences to information sharing, such that we each get some benefit from participating in certain systems of disclosure, but would all be better off if nobody did.  The merits of that kind of argument probably need to be taken up case-by-case, but whatever else might be wrong with it, the form of the argument is not really paternalistic, since the claim is that (most) individuals have a system-level preference that runs contrary to their preference for disclosure within the existing system.

The objections to Buzz, however, don’t really look like this. The claim is not that people’s foolish choices to disclose should be overridden for their own protection. The claim, rather, is that the system is designed in a way that makes it too easy to disclose information without choosing to do so in any meaningful way. Now, if I can log into your private database as user “J’ OR T=T”, you probably need to learn to write your SQL queries more carefully. But it is not terribly persuasive for me to argue that criticism of my breach is “paternalistic,” since after all you made your database accessible online to anyone who entered that login. It is substantially more persuasive if I have logged in as “guest” because you had enabled anonymous logins in the hope that only your friends would use them. On the Internet, the difference between protecting information from a user’s own (perhaps ill-advised) disclosure and protecting it from exploitation by an attacker ultimately, in practice, comes down to expectations. (The same is true in the physical world, though settled expectations make this less salient: Preventing me from getting into the boxing ring is paternalism; declaring the park a “boxing ring” by means of a Post-It note at the entrance is a pretext for assault.)
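
To make the analogy concrete, here is a minimal sketch of the kind of string-built query that an input along those lines slips through, next to the parameterized version that treats the same input as an ordinary value. The table, the rows, and the slightly adjusted payload are all invented for illustration.

```python
# A toy illustration of the injection alluded to above, using Python's built-in
# sqlite3 module and an in-memory table. The table, rows, and login check are
# invented purely for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'private stuff')")

payload = "J' OR 'T'='T"  # the classic always-true trick

# Vulnerable: the input is pasted directly into the SQL string, so the quote in
# the payload escapes the literal and the OR clause matches every row.
leaky = conn.execute(f"SELECT secret FROM users WHERE name = '{payload}'").fetchall()
print(leaky)  # [('private stuff',)] -- disclosure without knowing any real name

# Parameterized: the driver treats the whole payload as a plain string value,
# so nothing matches and nothing is disclosed.
safe = conn.execute("SELECT secret FROM users WHERE name = ?", (payload,)).fetchall()
print(safe)   # []
```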

What expectations are reasonable ultimately has to be an empirical question.  If we want to establish whether a particular protocol for information sharing is meaningfully consensual, it is not especially helpful to set the bar by appeal to some a priori preference for thinking of people as “smart” or “stupid.”  We should actually try to find out: “When people click this button in this context, do they understand what they are agreeing to? Is the clarity of the notice commensurate with the potential consequences?”  If it turns out that many actual users are dismayed and angry about what they have supposedly “agreed” to, it ought to throw into serious doubt the premise that they have agreed at all. And especially when it is the users themselves complaining, paternalism seems like an odd label to apply. The very limited empirical data we have suggests that people generally do not have a very clear understanding of how information about them is being used or may be used. In the case of Buzz, I’m not entirely sure about what is shared with whom under what conditions given different settings—and I study privacy and tech for a living.

One might say that this is an unflattering or obnoxious observation to make because it implies that we’re all stupid and irresponsible, at least for some values of “stupid.”  Unfortunately, it does not therefore become less true. If people are genuinely concerned and confused, they do not become less so if you suggest that only stupid people would be concerned and confused. If you insist that they stop being concerned and confused, because their concern and confusion logically support the position of regulators, they will thank you for directing them toward a group of people who are operating with a more accurate model of the world and start writing checks to EPIC.

With all due respect to the Queen of Hearts, I think the right approach here is verdict first, sentence afterward. First, let’s try to learn what people expect and believe about online privacy practices, what assumptions about time and cognitive capacity are reasonable, and so on. Maybe we need more time in this new space—the Internet has been around long enough, but interconnected social networking sites as a mass phenomenon are still relatively novel—so that users and sites can negotiate the right set of expectations, but it’d still be useful to have a way of tracking whether and how quickly this is actually happening.

Only after you’ve got this factual foundation is it even possible to define “paternalism” adequately in this context. Whether a rule is “paternalistic” can’t really be determined by looking at the rule itself, or even at the rule in combination with the beliefs and expectations a fully informed and perfectly rational being without time constraints might form. It depends what the facts about real people’s beliefs and expectations are. A rule based on too pessimistic a picture will be paternalistic in effect; one based on too sanguine a picture will fail to protect people from being abused.  An adequately protective rule, of course, need not be enforced by government. Privacy advocates and ordinary users can speak up and pressure firms to adopt better practices if they don’t want to lose market share. When the practices and expectations really are out of sync, this will work, and users will appreciate it. But they’re probably going to notice if it’s always the advocates of regulation who are drawing attention to genuine areas of concern, while libertarians predictably insist there are no infidels in Baghdad.

  • http://techliberation.com/author/berinszoka/ Berin Szoka

    I actually don't disagree, Julian—as I should hope would be clear by now after the long exchange we've had on my post from last night, which is now up to 45 comments (at least a dozen of them responses from me).

    First, you're quite right to stress that those of us who prefer market forces to regulatory solutions should be careful not to blast criticism as such, because criticism plays a critical role in the process of market discipline we're defending. I've tried to reiterate that point every time I write about this subject. I think I did a better job of that in my initial post on this subject on Tuesday than in last night's post, and for that, I'm sorry.

    Second, you're also quite right to warn that the flip-side of regulatory-minded privacy paternalism is the idea that, as you put it, “anyone who values privacy less highly than they do must be confused or irrational.” I assure you I don't actually subscribe to that idea, although—given my glib choice of title—I suppose I can't blame you for suggesting—none too obliquely—in the first sentence of your second paragraph that I do.

    But as I've emphasized repeatedly (here, here, here, here, in the Cyber-Libertarianism manifesto Adam and I drafted, and in my comments to the FTC Privacy Roundtable last November):

    granularity of control is a good thing and would move us closer to the “ideal world” in which adults would be fully empowered to tailor speech and privacy decisions to their own values and preferences. Specifically, in an ideal world, adults (and parents) would have (1) the information necessary to make informed decisions and (2) the tools and methods necessary to act upon that information. Importantly, those tools and methods would give them the ability not only to block the things they don’t like—objectionable content, annoying ads, or the collection of data about them—but also to find the things they want.

    Privacy is a highly subjective condition of personal control, and what doesn't bother me might bother you, either because you are simply more sensitive or because your personal circumstances are just different from mine. So I'm certainly not saying that the woman with the abusive ex-husband is “confused” or “irrational” for being upset by a certain data-sharing feature that I might find completely unobjectionable. This sort of subjectivism is a critical part of libertarianism: recognizing that every individual is unique and that it is simply impossible to make objective one-size-fits-all judgments about how anyone should feel.

    But the story cannot end there, or we would have to design every tool around the most sensitive user. In practice, this would be a lot like a legal rule (such as the one handed down by the 11th Circuit last week, as I noted here) that said every community in the country could define its own standards for “obscenity” and “indecency.” The practical effect would be that the most sensitive jurisdiction in the country would get to set standards for everyone else. Similarly, if we had to design every web tool so that it would be 100% safe for every user on the planet, it would be impossible to reach critical mass on any social networking tool, because every user would have to opt in (at least once, if not two or three times) to every feature that could conceivably be controversial to even a single user.

    So once we move into questions of what “privacy by design” really means in terms of real-world user interface, I completely agree that the ultimate measure of a tool's privacy-friendliness has to come from testing consumer expectations. Of course, that kind of A/B testing is the methodology upon which companies like Google build their entire enterprise. But it's entirely possible that Google might, nonetheless, seriously miscalculate based on testing their product internally (“eating your own dog food,” to use Google's term) among a user base that is either more tech-savvy or less privacy-concerned than the general population. It's of course also possible that a company like Google might decide to err on the side of sharing more because it will help them get up to scale very quickly with a service that could prove to be a serious competitor to the dominant service in the field (say, Facebook, in this case).

    So, Julian, unless someone happens to leak the company's focus group testing for a newly launched product, how should we go about establishing the “factual foundation” that you rightly suggest we would need before we can define consumer expectations and therefore decide what would and would not be paternalistic? If you really wanted to be scientific about it, I suppose you'd want to run an unbiased focus group or other experiment to see how users actually respond when presented with the user interface offered by the company—without trying to influence the outcome through subtle framing effects. If it becomes clear that a significant percentage of users end up sharing information they really would not have shared had the controls over their information sharing been presented more clearly, then yes, the user interface probably needs to be tweaked to create an “adequately protective rule.”
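
    For what it's worth, here is a minimal sketch of how the results of such an experiment might be read once somebody actually runs one. The two interface variants, the counts, and the "unintended sharing" outcome are all hypothetical; the arithmetic is just a standard two-proportion z-test.

    ```python
    # Minimal sketch: compare the share of users who unintentionally exposed
    # their contact lists under two hypothetical interface variants.
    # All names and counts are illustrative, not real data about Buzz.
    from math import sqrt, erf

    def normal_cdf(z: float) -> float:
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    def two_proportion_ztest(exposed_a: int, n_a: int, exposed_b: int, n_b: int):
        """Two-sided z-test for a difference between two proportions."""
        p_a, p_b = exposed_a / n_a, exposed_b / n_b
        pooled = (exposed_a + exposed_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - normal_cdf(abs(z)))
        return p_a, p_b, z, p_value

    # Hypothetical results: variant A is the current opt-in screen, variant B adds
    # an explicit "Your followers will be public on your profile" checkbox.
    p_a, p_b, z, p = two_proportion_ztest(exposed_a=180, n_a=1000, exposed_b=90, n_b=1000)
    print(f"Unintended sharing: A={p_a:.1%}, B={p_b:.1%}, z={z:.2f}, p={p:.4f}")
    ```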

    But what percentage is enough? How should we set a baseline for knowing what real users really want? Who's going to conduct such tests? And, more realistically, in the absence of such tests, how are we really supposed to establish reasonable consumer expectations? Unfortunately, that endeavor necessarily means that someone has to impose a value judgment on what's really “reasonable,” even if we all agree on the profoundly subjective nature of privacy. So, instead of having a statistically valid sample of users, we end up pointing to a popular outcry in the blogosphere and very sympathetic anecdotes about the worst-case scenario (e.g., the battered ex-wife in our case).
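
    And as a rough illustration of why the sample-size question matters, the back-of-the-envelope arithmetic below shows how many respondents an unbiased survey would need before a headline figure like "X% of users were surprised" means much. The margin of error, confidence level, and example numbers are assumptions for illustration, not data about Buzz.

    ```python
    # Rough sketch of the sample-size question: how large an unbiased survey is
    # needed before a proportion estimate is worth arguing over? The 3-point
    # margin and 95% confidence level below are illustrative choices.
    from math import ceil, sqrt

    def required_sample_size(margin: float, confidence_z: float = 1.96, p: float = 0.5) -> int:
        """Sample size to estimate a proportion within +/- margin (worst case p=0.5)."""
        return ceil((confidence_z ** 2) * p * (1 - p) / margin ** 2)

    def margin_of_error(p_hat: float, n: int, confidence_z: float = 1.96) -> float:
        """Margin of error for an observed proportion p_hat from n respondents."""
        return confidence_z * sqrt(p_hat * (1 - p_hat) / n)

    print(required_sample_size(margin=0.03))            # ~1068 respondents for +/-3 points
    print(f"{margin_of_error(p_hat=0.18, n=400):.3f}")  # ~+/-3.8 points if 18% of 400 were surprised
    ```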

    Besides the question of sample size and whether the complaints we're seeing from bloggers and those who comment on blogs are really representative of the general population, there's the essentially intractable problem of trying to determine what the reaction to a product like this would be if you could actually query each user before their perception of the product was colored by the feeding frenzy on the topic in the blogosphere. Privacy criticism, to be effective in the marketplace of reputation, has to be loud. Combine that with the natural tendency of bloggers and traditional journalists toward sensationalism, and the coverage that blows up around such stories gives us a skewed perception of just what consumer expectations are. Much of the criticism is simply inaccurate, as I tried to point out in my response to several of Molly Wood's points yesterday. (At the same time, I'll also point out that I've learned things since last night that make me more uncomfortable about how Buzz was implemented—most notably the fact that the service seems to opt in users who declined to opt in the first time they loaded Gmail after Buzz's launch.)

    This is indeed a morass, but I'm not saying the matter is hopeless and that we should just give up and defer blindly to whatever Google or other companies might want to do. The answer I've tried to suggest again and again is offering users more control over their own privacy and educating them on how to use it. In some ways, Google has done a great job with that here, such as by allowing users complete granularity of control over which lists of users can see a particular post. Indeed, if anything, Google hasn't gone far enough with this approach, since adding it to Google Reader might have addressed the complaints of the battered ex-wife, who was understandably surprised to see her Google Reader shared items sent out to the Buzz followers automatically added for her, even though she thought she hadn't signed up for the service and had previously made these shared items available only to a select list of contacts.

    So what does user empowerment really mean? In this case, there are two key, intersecting issues:
    1) The fact that Google Buzz auto-follows a user's most common contacts as “followees” (my term).
    2) The fact that the Google Profile required to use Buzz displays, by default, a user's followers and followees.

    I don't think either one of these features is so privacy-invasive that it shouldn't be allowed. Again, I think it's just a matter of implementation. Both of these features have benefits for all users by making the site more useful and encouraging participation. I mean, it's only going to be worthwhile for me to invest time and effort in maintaining a third set of conversations about ongoing micro-news (in addition to Facebook and Twitter) if the site achieves a critical mass of participation from users who might simply not get involved if they started out with a completely empty Buzz Inbox. If auto-following gets my less tech-savvy friends and family engaged, that creates value for all of us. That's why I'm not willing to throw the baby out with the bath water.

    Instead, if Google did more to make these two facts more obvious to the user and easier to control, I think we could “square the circle” of trying to empower all users to implement their subjective privacy preferences (which might indeed seem irrational to other users). That might require that some of the privacy-sensitive users take some action to protect their own privacy, like unchecking the box next to “Show my followers and who's following me on my Google Profile,” but I'm OK with that. These trade-offs are inherent in life. Someone will always have to expend some effort to customize a tool to their particular preferences. To insist that we impose a restrictive default because we just can't expect anyone to expend that effort is what I referred to as “Privacy Paternalism.” I wish I were attacking a straw man here, but this is precisely the view EPIC has taken again and again in these debates, particularly where Google is concerned.

    That said, the title of my post probably was a bit unfair in suggesting that anyone who didn't agree with me must be a “Privacy Paternalist.” This is a useful lesson in the dangers of trying to distill a nuanced concept like this one into a tweetable title. Reasonable minds can disagree about how much further Google ought to go to design a user interface that minimizes confusion, and empirical data would certainly help in that calculus. But at a certain point, if we think users just won't make the “right decision” no matter how clear the warnings are about the fact that they might be sharing more information than they mean to, we've moved beyond user empowerment and into pure privacy paternalism.

    So to sum up the longest blog comment I've ever written:
    1) Yes, by all means, let's criticize—once we've got our facts straight.
    2) If we're not sure about how something works, it would be better to ask questions than to leap to conclusions.
    3) In this case, Google did a pretty lousy job of answering what should have been fairly foreseeable questions about how Google Buzz works, how it intersects with Google Profile, Google Reader, etc.
    4) Let's try to get the best sense of the data we can. What, exactly, are users concerned about that doesn't comport with their expectations in context, and that they consistently fail to exercise control over despite being given the ability to do so?
    5) Let's look for constructive suggestions on how Google can fix those problems.
    6) Let's try to give Google some credit where credit is due for the privacy-enhancing aspects of Buzz.

    In the time it's taken me to write this, Google has announced that it may “end the marriage between Buzz and Gmail” in response to all these privacy concerns. On the one hand, that's comforting to me because it confirms my belief that the reputational marketplace does work to discipline even the biggest and supposedly smartest of technology companies.

    On the other hand, I'm afraid that this solution could end up substantially reducing the functionality of Buzz as something integrated with Gmail for those who liked that idea. Giving up on the integration concept completely, despite its benefits, just because some users are so concerned about the auto-following issue seems like a terrible case of throwing the baby out with the bathwater. I would hope there could be less drastic ways of building privacy controls that work well enough to empower privacy-sensitive users, rather than reducing functionality for everyone.

  • http://www.surfmarketing.co.uk/ Web Design Kent

    I think gov't policy should be decided by what the people want and for the good of the people. But sometimes what's good for the people is not really what people want, so someone must stand up to enforce such policy even if the people are against it.

  • Pingback: Vers une vie privée en réseau | traffic-internet.net
