Tracking and Trade-Offs

by Julian Sanchez on December 7, 2010 · 9 comments

While I harbor plenty of doubts about the wisdom or practicability of Do Not Track legislation, I have to cop to sharing one element of Nick Carr’s unease with the type of argument we often see Adam and Berin make here with respect to behavioral tracking.  As a practical matter, someone who is reasonably informed about the scope of online monitoring and moderately technically savvy already has an array of tools available to “opt out” of tracking. I keep my browsers updated, reject third-party cookies and empty the jar between sessions, block Flash by default, and only allow Javascript from explicitly whitelisted sites. This isn’t a perfect solution, to be sure, but it’s a decent barrier against most of the common tracking mechanisms, and it interferes minimally with the browsing experience. (Even I am not quite zealous enough to keep Tor on for routine browsing.) Many of us point to these tools as evidence that consumers have the ability to protect their privacy, and argue that education and promotion of privacy-enhancing technologies (PETs) is a better way of dealing with online privacy threats. Sometimes this is coupled with the claim that failure to adopt these tools more widely just goes to show that, whatever they might tell pollsters about an abstract desire for privacy, in practice most people don’t actually care enough about it to undergo even mild inconvenience.

That sort of argument seems to me to be very strongly in tension with the claim that some kind of streamlined or legally enforceable “Do Not Track” option will spell doom for free online content as users begin to opt out en masse. (Presumably, of course, The New York Times can just have a landing page that says “subscribe or enable tracking to view the full article.”) If you think an effective opt-out mechanism, included by default in the major browsers, would prompt such massive defection that behavioral advertising would be significantly undermined as a revenue model, logically you have to believe that there are very large numbers of people who would opt out if it were reasonably simple to do so, but aren’t quite geeky enough to go hunting down browser plug-ins and navigating cookie settings. And this, as I say, makes me a bit uneasy. Because the hidden premise here, it seems, must be that behavioral advertising is so important to supplying this public good of free content that we had better be really glad that the average, casual Web user doesn’t understand how pervasive tracking is or how to enable more private browsing, because if they could do this easily, so many people would make that choice that it would kill the revenue model. So while, of course, Adam never says anything like “invisible tradeoffs are better than visible ones,” I don’t understand how the argument is supposed to go through without the tacit assumption that if individuals have a sufficiently frictionless mechanism for making the tradeoff themselves, too many people will get it “wrong,” making the relative “invisibility” of tracking (and the complexity of blocking it in all its forms) a kind of lucky feature.

There are, of course, plenty of other reasons for favoring self-help technological solutions to regulatory ones. But as between these two types of arguments, I think you probably do have to pick one or the other.

  • http://www.techliberation.com Adam Thierer

    Julian… First, welcome back to the TLF, even if it’s only to give me grief! And, even though I can’t understand why you didn’t just comment on my previous post, I welcome this input.

    Let me clarify a few things here. First, your analysis ignores the difference between blocking mechanisms that arise in the marketplace spontaneously versus those that might be mandated or encouraged by government. I have no idea how many people might opt out of various online advertising schemes if Congress instituted a Do Not Track scheme, but I have to believe that it would be a hell of a lot more than might occur in the absence of such a mandate. After all, that’s why many advocates of privacy regulation favor such a mandate despite the existence of self-help tools that already block advertising and data collection.

    In this regard, I look at ad-blocking technologies in much the same way I view filtering technologies in the online child safety context: I have no problem with them when they are privately deployed and the result of natural choices. When government mandates filters, however, it tilts the balance in favor of a specific outcome and skews markets in unnatural directions. The same is true for blocking tools and techniques in the privacy / advertising context.

    Second, I want to make it clear that I hold no brief for any particular online business or particular business model. I’m quite agnostic about things like paywalls, micropayments, subscriptions, and even online advertising. But the law in this case would clearly be stacking the deck against a particular business model (targeted advertising), implicitly favoring other types of advertising or business models. We often hear the response from regulatory advocates that “your broken business model is not my problem.” Except that it is a problem when the law would break those business models preemptively by disallowing them.

  • http://jerrybrito.com Jerry Brito

    I think the core of Julian’s argument is this (believe it or not) one sentence: “So while, of course, Adam never says anything like “invisible tradeoffs are better than visible ones,” I don’t understand how the argument is supposed to go through without the tacit assumption that if individuals have a sufficiently frictionless mechanism for making the tradeoff themselves, too many people will get it “wrong,” making the relative “invisibility” of tracking (and the complexity of blocking it in all its forms) a kind of lucky feature.”

    That’s the Coase theorem he’s describing there. Given zero transaction costs, what would most folks do? Would they allow themselves to be tracked or not? Once you know the answer to that question you can pursue the policy that minimizes transaction costs. If the answer is that most people would opt out, then maybe we should have an opt-in regime, and if most people would opt in, then maybe we should have an opt-out regime. That’s an empirical question.

    I think what Adam’s getting at, and correct me if I’m wrong, is that when folks answer a poll that asks them if they mind being tracked or not, and most folks say they would opt out if only existing transaction costs weren’t so high, they are likely ignoring the trade-offs. If most people opted out, Facebook wouldn’t be free; it would be $20 a month. Facing that trade-off, it’s arguable that most people would not opt out.

    What Adam is arguing against is the government imposing one politically salable regime by fiat to artificially replace the regime that the market has developed organically. That is, the regime that has developed with consumers facing trade-offs. Make sense?

  • Geoffrey Manne

    It’s an error cost problem. Plenty of people would not opt out of a government-mandated program, precisely because for many people the cost of learning which state of the world is better for them is greater than the presumed benefit. If this induces more people to remain behind the govt’s protective wall to their own detriment than would fail to protect themselves in a world of no mandate (also to their own detriment), then the policy would be a problematic one. As is always the case, market solutions to the latter problem (if it is a problem) are more likely than solutions to the problem caused by a govt mandate. Admittedly, given the availability of opt-out and, I presume, the possibility of opt-back-in, this problem is decreased with this sort of govt mandate. But, as Adam points out, the problem on the business model side is greater, and there is little easy option for opt-out on the supply side (your NYT landing page example is at best a partial solution, as it requires action by viewers and leaves the webpage subject to the substantial risk that not enough viewers will indeed opt back in to support its business model).

  • http://twitter.com/binarybits Timothy Lee

    But if the choice were either tracking or $20/month Facebook, couldn’t Facebook present users with that choice explicitly? Users who visit Facebook could be presented with a page that says “You have the DNT header turned on! To continue, either turn it off or sign up for Facebook Premium.” If it’s true that consumers really don’t value their privacy at $20/month, then they’ll turn off the DNT header. Otherwise, they’ll pony up the $20. Either way, the user seems to win.

    My problem with DNT is that I don’t think the FTC has really thought through the definition of “track.” Any definition stringent enough to constrain the kind of behavioral tracking people find sinister will also burden lots of small websites doing stuff like site analytics and personalization that most people find unobjectionable. But I agree with Julian: it’s not much of an objection to say that if we give users more control they’ll make decisions we don’t agree with.
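    [Editor’s note: the gating Tim describes maps directly onto the DNT request header, which carries “1” when the user has asked not to be tracked and “0” when the user has consented. A minimal sketch of how a site could branch on it — the function names and the “premium” branch are hypothetical illustrations, and the $20 tier is Jerry’s made-up figure:]

```python
def dnt_status(headers):
    """Classify a request by its DNT header: "1" means the user has
    opted out of tracking, "0" means consent, and absence means no
    preference was expressed."""
    value = headers.get("DNT")
    if value == "1":
        return "opt-out"
    if value == "0":
        return "consent"
    return "unset"

def choose_response(headers):
    # Hypothetical site policy: users who opt out of tracking are
    # offered a paid tier; everyone else gets the free, ad-supported
    # (tracked) page.
    if dnt_status(headers) == "opt-out":
        return "offer_premium_signup"
    return "serve_free_page"
```

    [Whether a site may simply condition access this way, refusing the free page to DNT users, is exactly the policy question the thread is debating.]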

  • http://jerrybrito.com Jerry Brito

    First, I agree with you on the definition of “track.” To your main point: Yes, I agree that it would be great if Facebook presented its users with an option to track or pay. The question is this: if Facebook knew that the vast majority of its users would indeed pay (and remember, $20 is just a figure I made up) instead of being tracked, wouldn’t they offer that choice? If they’re not offering that choice, it tells us something about what they think their users would do. (Also remember that right now users have the option to not use the site at all.) So when you say, “I agree with Julian: it’s not much of an objection to say that if we give users more control they’ll make decisions we don’t agree with,” you’re right that that makes for a weak argument, but what if you replace “give” with “force providers to give”?

  • http://www.techliberation.com Adam Thierer

    Allow me to clarify one more time, since both Julian and now you are putting words in my mouth, or at least assuming I am making an argument I have never made. When you say, “I agree with Julian: it’s not much of an objection to say that if we give users more control they’ll make decisions we don’t agree with,” you are implying that I do not agree with a consumer’s decision to block certain sites or “tracking” activities because of the impact it will have on online content and services. Again, I don’t give a damn what consumers do and I am not against them exercising their choice to block sites or services.

    What I have raised as an objection is the idea of the government artificially tilting markets in that direction by facilitating blocking, and I have outlined some of the trade-offs implicit in any move to do so. Again, go back to my content filtering example. I have no problem with voluntary choices by web surfers to block access to sites or content they might find objectionable (porn, hate speech, etc). I do, however, have a problem with the government mandating or subsidizing filtering to facilitate that goal, and I would, as part of any analysis of such a regulatory scheme, point out that it could lead to a variety of unintended consequences. That doesn’t mean I am somehow anti-choice; rather, I am anti-intervention as it pertains to government mandates made in the name of expanding choice, because the government’s thumb on the scales could distort the organic, experimental evolution of these markets.

    By way of comparison, I’m fine with what Microsoft announced yesterday in terms of new IE9 functionality that could allow others to craft blocking schemes, for example. Should things go wrong under that voluntary system, error detection and correction is far more likely. The interaction of market actors in that scenario would likely lead to a very different equilibrium than what we might expect under a legally-mandated blocking / filtering regime.

  • Jim Harper

    I think the best way to take this post is as an appeal to argue our points here more carefully.

    It takes a great deal of care and subtlety to argue for market outcomes while parrying regulation that may be aimed at the same goals markets would reach (but that would inevitably distort). “Do-not-track” is a probable regulatory morass, but tools that better allow people to avoid tracking are laudable.

    We must recognize the difference between arguing for a particular outcome and arguing that law and regulation should force that outcome. And we should avoid hyperbole and accusations of hyperbole unless we can cite it.

  • Jim Harper

    It could be that the problem is you being a controversialist, Tom, and not something intrinsic to the issues.

  • Tom Sydnor

    Jim, no, I am not a controversialist. Quite the contrary: As here, I try hard to pick my battles carefully enough to ensure that I rarely receive the sort of coherent and substantive reply that Julian failed to provide to my critique of his argument or his oh-so-hip-and-trendy invocation of Tor.

    Indeed, a critical characteristic tends to make real controversialists easily identifiable: Because they care more about picking fights than being right, true controversialists tend to pick lots of fights—and then run away from most of them. Review all the comments in this thread to which Julian dared not even reply. Julian is the controversialist, not I.
