Over this past week, a lot of people were making hay over this recent ReadWriteWeb story, “Facebook’s Zuckerberg Says The Age of Privacy is Over.” Seems that some people were taking issue with Facebook founder Mark Zuckerberg’s suggestion that Facebook’s recent site policy changes, which generally encouraged more sharing of information, were in line with public expectations. Most people put words in Zuckerberg’s mouth and accused him of saying that “privacy is over” or that he claimed he “is a prophet,” neither of which he actually said. But let’s ignore the fact that some people made stuff up and get back to the point: What set people off about Facebook’s recent site changes and Zuckerberg’s rationalization of them?
I think it goes back to the fact that a lot of people want to have their cake and eat it too. “It is the paradox of the cyber era,” notes Washington Post columnist Michael Gerson: We are “a nation of exhibitionists demanding privacy.” Indeed, that’s true, but there’s a good reason why this so-called “privacy paradox” exists. As Larry Downes, author of the brilliant new book, The Laws of Disruption, argues:
People value their privacy, but then go out of their way to give it up. There’s nothing paradoxical about it. We do value privacy. It’s just that we’re willing to trade it for services we value even more. Consumers intuitively look at the information being requested and decide whether the value they receive for disclosing it is worth the cost of their privacy. (p. 80)
That’s exactly right. When confronted with real-world choices about privacy and information sharing, we often are willing to accept some trade-offs in exchange for something of value. But when we are asked about this process, we are loath to admit that we would willingly engage in such privacy-for-services trade-offs, even though we do it every day of our lives. As Michael Arrington of TechCrunch rightly points out:
the rest of us seem to be ok with Gmail. And our phone. That’s because the benefits of those products far outweigh the privacy costs. And people are going to be just fine with Facebook, too.
And he notes there are other examples of people seemingly making these trade-offs every day, even if it seems illogical to others why they would do so.
The most articulate counter-argument to all this comes from Michael Zimmer, an assistant professor in the School of Information Studies at the University of Wisconsin-Milwaukee, who says:
Users want to be able to control what information they provide and to whom it is visible. That’s the essence of privacy, and it’s still very much in demand. That doesn’t make one a Luddite. It makes one a responsible user of information technology.
Well, I can generally agree with all that, but the question is what it means in practice. After all, we’re all in favor of giving consumers more choices and empowering them to make decisions for themselves. Berin Szoka and I have again and again and again argued that:
In an ideal world, adults would be fully empowered to tailor privacy decisions, like speech decisions, to their own values and preferences (“household standards”). Consumers would have (1) the information necessary to make informed decisions and (2) the tools and methods necessary to act upon that information. Importantly, those tools and methods would give them the ability to block the things they don’t like—annoying ads or the collection of data about them, as well as objectionable content—while also helping them find the information and content they desire.
But let’s be clear about something. Even as we move closer to this ideal state, there are still many citizens who will choose to never take advantage of privacy-enhancing tools and will never read a single privacy policy. But if you ask most of those people in a random survey, “Do you care about your privacy?” what do you think they are going to say? Well, it should be as obvious as the answer to a poll question like: “Do you love your mother?” Yes, of course we do! But, again, how does that translate to real-world behavior? More importantly, what are the ramifications for public policy?
Last month, I sat on a panel about polls and privacy expectations at the Federal Trade Commission’s December 7th workshop on “Exploring Privacy.” I argued that, while privacy polls and surveys may offer us some interesting insights into how some in the public think about advertising and privacy in the abstract, ultimately, polls and surveys are no substitute for real-world experiments in which people make real choices, in real time, often with real money, and face many real trade-offs. [See Berin’s paper on this issue.]
I also argued that privacy is a highly subjective condition and that consumers are empowered with many real privacy controls, such that they can make the privacy choices that are right for them. [See this ongoing series and this paper.] Moreover, it remains unclear what harms privacy regulatory advocates are really trying to protect us against. For these reasons, I argued that rational ignorance may often be at work, since many consumers likely won’t feel the need to read privacy policies or take steps to “protect their privacy” online. Or people simply accept, implicitly, that they are getting something of value even if it means they might also be sharing some information about themselves with others.
Which brings us back to Zuckerberg’s comments. People expressed outrage — even if he didn’t say the things they accused him of saying — and many rushed to claim that privacy is still alive and well and worthy of protection, even if that means an onerous federal data regulation regime. But I wonder… how many of those people left Facebook or changed their behavior in any other way after they expressed that outrage? I suspect most people went right on with their lives and probably jumped right back on Facebook and started sharing even more about themselves with the world.