A Response to Nick Carr on Privacy & Trade-Offs

December 7, 2010

This is a response to Nick Carr’s recent piece, “The Attack on Do Not Track,” in which he goes after me for some comments I made in this essay about the trade-offs at work in the privacy and online advertising debates.  In his critique of my essay, he argues:

What the FTC is suggesting is that the unwritten quid pro quo be written, and that the general agreement be made specific. Does Thierer really believe that invisible tradeoffs are somehow better than visible ones? Shouldn’t people know the cost of “free” services, and then be allowed to make decisions based on their own cost-benefit analysis? Isn’t that the essence of the free market that Thierer so eloquently celebrates?

My response to Nick follows.

Nick…  Did I anywhere suggest that “invisible tradeoffs are somehow better than visible ones”?  I can’t remember saying that anywhere, so perhaps you can point to where I did.  I don’t think you’ll find anything when you search, because I never suggested such a thing.

That being said, strict contracting and consent models are not always possible in a free market economy, even if they are ideal.  In essence, much of the history of advertising and marketing is built on the sort of “unwritten quid pro quos” you deride in your essay.  Are you against radio or television advertising on similar grounds?  Print ads?  Direct mail?  Billboards?  There are steps you can take to avoid advertising and marketing in those contexts, but few of us would expect any sort of formal contract and consent form to be delivered to our attention beforehand.  And opting out of them entirely is very difficult.  So, while I agree that, generally speaking, “people [should] know the cost of ‘free’ services, and then be allowed to make decisions based on their own cost-benefit analysis,” let’s understand that such ideal textbook models of perfect information and informed consent aren’t always possible.

I will admit, however, that the difference with online advertising is that personal information may be collected about the consumer of the advertising in question.  That did not always occur as part of those previous advertising “quid pro quos.”  Understandably, this raises the blood pressure of those who want to “property-tize” personal information and, in essence, apply a copyright-like, permissions-based regime to any collection or reproduction of such information.  Such an information control regime will be challenging to enforce, especially in light of the significant amounts of personal information that we voluntarily place online about ourselves.  [See my earlier essay, “Privacy as an Information Control Regime: The Challenges Ahead,” for further discussion.]

Nonetheless, an ideal world would be one in which trade-offs were more visible and consent / contracting was easier, whether we are talking about privacy, copyrighted material, or anything else.  For example, in the context of online child safety and potentially objectionable media content, I have long argued that:

The ideal state of affairs, therefore, would be a nation of fully empowered parents who have the ability to perfectly tailor their family’s media consumption habits to their specific values and preferences. Specifically, parents or guardians would have (1) the information necessary to make informed decisions and (2) the tools and methods necessary to act upon that information. Importantly, those tools and methods would give them the ability to not only block objectionable materials, but also to more easily find content they feel is appropriate for their families.

My former colleague Berin Szoka has applied this same ‘ideal world’ model to privacy in this filing to the Federal Trade Commission:

In an ideal world, adults would be fully empowered to tailor privacy decisions, like speech decisions, to their own values and preferences (“household standards”).  Specifically, in an ideal world, adults (and parents) would have (1) the information necessary to make informed decisions and (2) the tools and methods necessary to act upon that information.  Importantly, those tools and methods would give them the ability to block the things they don’t like—annoying ads or the collection of data about them, as well as objectionable content.

Again, this would move us closer to an explicit contracting / consent regime for the media content in question in both cases.  Is it desirable?  You bet.  Is it possible?  Likely not.  Can we strive to get closer to the ideal state?  Yes, but not without costs.  And that’s the key point I was trying to get across in my earlier essay on Do Not Track.  The trade-offs here are real and could be quite profound for online content and culture.  If we move toward a more rigorous information control regime to restrict personal information flows in the name of protecting privacy, we should not be surprised when that trade-off becomes more explicit, and more expensive.

One final point.  You argue that “the suggestion that people shouldn’t be allowed to make informed choices about their privacy because some businesses may suffer as a result of those choices is ludicrous and even offensive.”  Again, I’ve already said that we can strive for more and better informed consent models, but you are pretending here that it’s far simpler than it is in reality.  And I’ve already noted that the important point is not protecting businesses, per se, but rather understanding that online content and culture are currently subsidized primarily by advertising business models that would be forcibly broken by regulation, and that we should consider the trade-offs that entails.  Finally, is there any role for personal responsibility in your view?  After all, there are steps that web surfers can take to address unwanted advertising and data collection techniques.  Here’s a short list of privacy solutions that my former PFF colleagues put together.  If we expect consumers to exercise some personal responsibility to avoid unwanted content or communications in the free speech / online child safety context, why not here in the privacy context as well?
