Privacy Solutions (Part 1): Introduction

September 5, 2008

By Adam Thierer & Berin Szoka

Whatever ordinary Americans actually think about online privacy, it remains a hot topic inside the Beltway. While much of that amorphous concern focuses on government surveillance and government access to information about web users, many in Washington have singled out targeted online advertising by private companies as a dire threat to Americans’ privacy — and called for prophylactic government regulation of an industry that is expected to more than double in size, from $21.7 billion last year to $50.3 billion in 2011.

In 1998, when targeted advertising was in its infancy, the Federal Trade Commission (FTC) proposed four principles as the basis for self-regulation of online data collection: notice, choice, access, and security. In 2000, the Commission declared that too few online advertisers adhered to these principles and therefore recommended that Congress mandate their application in legislation that would allow the FTC to issue binding regulations. Subsequent legislative proposals (indexed by CDT, along with other privacy bills) have languished in Congress ever since. During this time, self-regulation of data collection (e.g., the National Advertising Initiative) has matured, the industry has flourished without any clear harm to users, and the FTC has returned to its original support for self-regulation over legislation or regulatory mandates.

But over the last year, the advocates of regulation have succeeded in painting a nightmarish picture of all-invasive snooping by online advertisers using ever more sophisticated techniques for collecting data for targeted advertising. The FTC has responded cautiously, proposing voluntary self-regulatory guidelines intended to address these concerns. The agency recognizes that this growing revenue stream is funding the explosion of “free” (to the user) online content and services that so many Americans now take for granted, and that more sophisticated targeting produces ads that are more relevant to consumers (and therefore also more profitable to advertisers).

The Hill has responded by holding hearings, sending out angry letters to online advertisers, and demanding that ISPs cease experimenting with a new form of online behavioral advertising (OBA) based on packet inspection. Some in the think tank community have cheered this on, demanding draconian regulation. But before rushing to regulate — and potentially choking the economic engine fueling “free” online content and services — policymakers should be asking whether alternatives to command-and-control regulation can adequately address privacy concerns.

We are in the process of penning a major study on this debate, which will challenge those who are calling for regulation to:

(1) Show us the harm or market failure.

(2) Prove to us that no less restrictive alternative to regulation exists.

(3) Explain to us how the benefits of regulation outweigh the costs.

It is that second point that we would like to focus on in a series of upcoming (and likely ongoing) blog entries. Building on the excellent work of our TLF colleague Ryan Radia, we plan to detail the many “technologies of evasion” (i.e., empowerment or user “self-help” tools) that allow web surfers to better protect their privacy online — and especially to defeat tracking for OBA purposes. These tools and methods form an important part of a layered approach that, in our view, provides an effective alternative to government-mandated regulation. Such an approach would also include user education, self-regulatory schemes like the National Advertising Initiative, and FTC enforcement of privacy policies.

Before one can determine the true necessity of government intervention (and, indeed, its constitutionality), one must understand the availability, sophistication, and convenience of the technologies of evasion we will describe. In an important 2001 Cato Institute paper, our TLF colleague Tom Bell argues that web surfers must bear some of the responsibility for protecting themselves online, just as they do with regard to potentially objectionable (i.e., “indecent”) online content:

Digital self-help makes unnecessary state action limiting speech that is indecent or harmful to minors. The same argument applies to state action that would limit speech by commercial entities about Internet users. Digital self-help offers more hope of protecting Internet users’ privacy than it does of effectively filtering out unwanted speech, and the availability of such self-help casts doubt on the constitutionality of legislation restricting speech by commercial entities about Internet users. From the more general point of view of policy, moreover, digital self-help offers a better approach to protecting Internet privacy than does state action.

What Bell means is that the digital “self-help” tools consumers rely on to protect themselves or their children from objectionable content must always confront the subjective problems of defining what is indecent or obscene. Thus, even though Internet filtering tools and other parental controls can generally offer a very effective means of blocking access to objectionable content, at the margins there will always be definitional controversies. By contrast, the privacy self-help tools we will describe are much more likely to provide an effective shield, because consumers who are truly sensitive about their online privacy can make far more definitive choices, such as allowing or blocking cookies outright, or preventing certain types of personal information from being collected or tracked for targeted advertising purposes.
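To make the point concrete: refusing cookies is perhaps the simplest and most definitive of these self-help choices, and it requires no regulator at all. The sketch below (our illustration, not drawn from any particular tool mentioned in this post) shows how even a few lines of Python’s standard library can configure an HTTP client that silently discards every cookie a server tries to set — the kind of unambiguous opt-out that content filtering, with its definitional gray areas, can never quite achieve.

```python
import urllib.request
from http.cookiejar import CookieJar, DefaultCookiePolicy

# An empty allow-list tells DefaultCookiePolicy that cookies are
# acceptable from *no* domain at all -- a blanket refusal.
no_cookies = DefaultCookiePolicy(allowed_domains=[])
jar = CookieJar(policy=no_cookies)

# Requests made through this opener discard every Set-Cookie header
# the server sends back, so no cross-site tracking state accumulates.
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# e.g. opener.open("http://example.com/")  # the jar stays empty
```

The same all-or-nothing policy is what a browser’s “block all cookies” setting implements; more selective users could instead pass a `blocked_domains` list naming only known ad networks, trading some convenience for finer-grained control.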

Finally, Bell correctly notes that “digital self-help” is more likely to be effective than regulatory solutions for a variety of reasons — not least of which is that truly “bad” actors on the Internet are rarely stopped, or even discouraged, by regulation when the activity involves the pure exchange of bits (as opposed to purchases or shipments), because they can generally continue their activities from offshore. In such cases, technical means are the only way of stopping such activities.

We invite you to share examples of technologies of evasion with us as we go along. And we hope that our TLF colleagues might chime in with entries of their own as they find examples of privacy-enhancing technologies that privacy-conscious web surfers can employ to take privacy into their own hands.

– Adam Thierer & Berin Szoka
