While I harbor plenty of doubts about the wisdom or practicability of Do Not Track legislation, I have to cop to sharing one element of Nick Carr’s unease with the type of argument we often see Adam and Berin make with respect to behavioral tracking here. As a practical matter, someone who is reasonably informed about the scope of online monitoring and moderately technically savvy already has an array of tools available to “opt out” of tracking. I keep my browsers updated, reject third-party cookies and empty the jar between sessions, block Flash by default, and only allow JavaScript from explicitly whitelisted sites. This isn’t a perfect solution, to be sure, but it’s a decent barrier against most of the common tracking mechanisms, and it interferes minimally with the browsing experience. (Even I am not quite zealous enough to keep Tor on for routine browsing.) Many of us point to these tools as evidence that consumers have the ability to protect their privacy, and argue that education and promotion of PETs is a better way of dealing with online privacy threats. Sometimes this is coupled with the claim that failure to adopt these tools more widely just goes to show that, whatever they might tell pollsters about an abstract desire for privacy, in practice most people don’t actually care enough about it to undergo even mild inconvenience.
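For readers curious what that setup looks like in practice, the settings above can be sketched as a Firefox `user.js` fragment. This is a hedged illustration, not a recommendation: the preference names below come from Firefox’s `about:config` in older releases, their availability varies by version, and other browsers expose equivalent toggles through their own settings UI.

```javascript
// user.js — dropped into a Firefox profile directory, these lines
// lock in the preferences each time the browser starts.
// Preference names and values vary across Firefox versions.

// Reject third-party cookies (0 = accept all, 1 = block third-party).
user_pref("network.cookie.cookieBehavior", 1);

// Keep cookies only for the current session — "empty the jar between sessions."
user_pref("network.cookie.lifetimePolicy", 2);

// Require a click before plugins such as Flash run — "block Flash by default."
user_pref("plugins.click_to_play", true);

// Per-site JavaScript whitelisting was typically handled by an extension
// such as NoScript rather than by a single built-in preference.
```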
That sort of argument seems to me to be very strongly in tension with the claim that some kind of streamlined or legally enforceable “Do Not Track” option will spell doom for free online content as users begin to opt out en masse. (Presumably, of course, The New York Times can just have a landing page that says “subscribe or enable tracking to view the full article.”) If you think an effective opt-out mechanism, included by default in the major browsers, would prompt such massive defection that behavioral advertising would be significantly undermined as a revenue model, logically you have to believe that there are very large numbers of people who would opt out if it were reasonably simple to do so, but aren’t quite geeky enough to go hunting down browser plug-ins and navigating cookie settings. And this, as I say, makes me a bit uneasy. Because the hidden premise here, it seems, must be that behavioral advertising is so important to supplying this public good of free content that we had better be really glad that the average, casual Web user doesn’t understand how pervasive tracking is or how to enable more private browsing, because if they could do this easily, so many people would make that choice that it would kill the revenue model. So while, of course, Adam never says anything like “invisible tradeoffs are better than visible ones,” I don’t understand how the argument is supposed to go through without the tacit assumption that if individuals have a sufficiently frictionless mechanism for making the tradeoff themselves, too many people will get it “wrong,” making the relative “invisibility” of tracking (and the complexity of blocking it in all its forms) a kind of lucky feature.
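It is worth pausing on just how lightweight the proposed mechanism is. The “Do Not Track” signal under discussion is a single HTTP request header, `DNT: 1`, which a browser can attach to every request; whether any server or ad network honors it is, absent regulation, entirely voluntary. A minimal Python sketch (the URL is a placeholder):

```python
import urllib.request

# Build a request carrying the Do Not Track preference. The header is
# purely advisory: the server and its ad partners decide whether to
# honor it, which is exactly what a legal mandate would change.
req = urllib.request.Request("https://example.com/", headers={"DNT": "1"})

# urllib normalizes header names to Capitalized form internally.
print(req.get_header("Dnt"))  # prints: 1
```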
There are, of course, plenty of other reasons for favoring self-help technological solutions to regulatory ones. But as between these two types of arguments, I think you probably do have to pick one or the other.