Privacy, Security & Government Surveillance

I highly recommend this analysis of the Federal Trade Commission’s (FTC) new “Do Not Track” proposal by Ben Kunz over at Bloomberg Businessweek.  In his essay, Kunz, the director of strategic planning at Mediassociates, a media planning and Internet strategy firm, hits many of the major themes we have developed here at the TLF when critiquing the FTC’s plan and privacy regulation more generally. Namely, we live in a world of trade-offs and regulation can have unintended consequences.  Kunz argues that, “while the [FTC] may have consumers’ best interests at heart… the idea has two huge problems”:

1. It won’t stop online ads. While Do Not Call lists kept telemarketers at bay, you’ll still see tons of banners and videos everywhere online. They’ll simply be less relevant.

2. Do Not Track will send billions of dollars to the big online publishers, hurting the little sites you might find most interesting.

The second point is painful. It could really harm you, too, dear consumer, if you read things online other than The New York Times, Bloomberg, or iVillage.com. Why? The “Long Tail” of niche content is going to get crushed.

Let’s follow the money. More than $25 billion was spent on U.S. online ads in 2010, according to eMarketer. About $1 billion of this went to behaviorally targeted ads tied closely to user data; nearly $8 billion overall is in some way related to online tracking. That $8 billion has posed horrible problems for publishers of major websites, such as Bloomberg (this column’s host, for which we don’t work, so we’ll be equally critical of it) or The New York Times. Before tracking came along, such publishers were the only means of reaching a known type of audience. Business people read Businessweek.com while moms read O magazine online and iVillage.com. Like the publications of the past century, a given website has always been a proxy for an audience target. Alas for the big publishers, good data on audiences has meant that smart marketers could leave big, expensive sites behind. So in perhaps the biggest revolution of Internet marketing, the more data you can collect about today’s customers, the cheaper online advertising gets.

Continue reading →

[Here’s an op-ed of mine that recently ran on Reuters.  Readers will recognize many of these themes and arguments since I have developed them here on the TLF many times before.]

Privacy Regulation and the “Free” Internet

by Adam Thierer, Mercatus Center at George Mason University

Would you like to pay $20 a month for Facebook, or a dime every time you did a search on Google or Bing?  That’s potentially what is at stake if the Obama administration and advocates of stepped-up regulation of online advertising get their way.

The Internet feels like the ultimate free lunch.  Once we pay for basic access, a cornucopia of seemingly free services and content is at our fingertips.  But those services don’t just fall to Earth like manna from heaven.  What powers the “free” Internet are data collection and advertising. In essence, the relationship between consumers and online content and service providers isn’t governed by any formal contract, but rather by an unwritten quid pro quo: tolerate some ads or we’ll be forced to charge you for service.  Most consumers gladly take that deal—even if many of them gripe about annoying or intrusive ads, at times.

Continue reading →

Is Watching “Spying”?

December 23, 2010

I was struck by the absurd title of a New York Post story from yesterday: Is Your Restaurant Spying on You? Some restaurants are—shocker—making note of your preferences and your qualities as a customer, for good or bad. That’s “spying”?

Of course, headlines are meant to catch attention. The story illustrates a phenomenon that will continue to proliferate, and that will probably continue to raise hackles, classed as “spying”, “privacy invasion”, “dossier building”, and such. People and businesses are more able to capture information about each other than they were before. (It is a two-way street. We consumers know more about businesses, and businesses know more about us.)

That’s a big change from the recent past. Over the past century or so, people got more mobile and thus less amenable to consistent observation—which means less amenable to being affixed with a reputation. Now information systems are catching up. What kind of person you are—a good tipper, a brusque faux gastronome—that information might precede you to a restaurant. Object to it. Call it what you want. But you might also consider getting used to it, tipping better, and being polite.

None of this is a comment on what our public policies should be. They should neither favor this cultural change nor fight it. People need to understand what happens with information about them, and they should be able to withhold information if they want, though that may be hard for privacy outliers to do.

As a student of information, I find it hard to accept that a restaurant noting the information you’ve made available to it is “spying.”

Advocates of regulation will credit regulators for the fact that major browser providers Microsoft and Mozilla are going after online “tracking.” In forthcoming versions of their browsers, they will provide controls that protect against unwanted monitoring even better than the controls that now exist.

When consumer advocates cluster in Washington, D.C., asking federal agencies to solve consumer issues, of course, any progress on the issues will be credited to the threat of coercion. But experiments like these have no controls.

Decisions about the qualities of goods and services are made out at the leading edge of consumer demand, where producers work to anticipate developing public interests. Meeting demand after it has been realized is a recipe for business failure because competitors getting there before the others win market share and profits. Laggards are losers.

You can tell when regulators push for something that does not match up with consumer demand as perceived in the business sector. The regulators get nowhere. That would be the FTC’s call a decade ago for a suite of regulations requiring “notice, choice, access, and security.” The current push for “tracking” controls does appear to meet up with consumer demand, and, again, the browser providers are working on it years ahead of what any regulation would have required.

I’ve put “tracking” in scare quotes because the open question is just what anyone means by the word. The report linked above notes a comment from Google, provider of the Chrome browser:

“The idea of ‘Do Not Track’ is interesting, but there doesn’t seem to be consensus on what ‘tracking’ really means, nor how new proposals could be implemented in a way that respects people’s current privacy controls,” said the company…

Maybe Google will be the laggard and loser for not moving on “tracking” as fast as its competitors. That wait-and-see stance is one approach; Microsoft and Mozilla will each take a different tack to the problem. The result will be an experiment that does have controls. The browser provider that meets up with consumer interests, in the consumer-friendliest way, wins. Such would not be the case if a federal regulation—yes, one-size-fits-all—determined what “tracking” was and how browsers or others would provide protection against it.

Marketplace competition will do better than any other known method for determining what “tracking” means to consumers and what to do about it. No privacy advocate, technologist, advocacy group, or academic knows what to do here.

The one thing I recommend is that do-not-track efforts should control the content of the header and the domains the browser communicates with. Simply putting a “do-not-track” signal in the header would punt the problem back to regulators and the cadre that surrounds them. This group would come up with something that satisfies itself, the regulatory community, but that does not digest and reconcile actual consumers’ competing interests in privacy, convenience, access to content, and so on.
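For concreteness, the signal under discussion is tiny: “do not track” rides along as an ordinary HTTP request header (`DNT: 1`). The sketch below, using Python’s standard library, builds such a request without sending it; the header by itself only expresses a preference, while controlling which domains the browser actually talks to requires real blocking, which is the gap described above. The URL is a placeholder.

```python
import urllib.request

# Build (but do not send) a request carrying the proposed
# Do Not Track header; example.com stands in for any site.
req = urllib.request.Request("https://example.com/")
req.add_header("DNT", "1")  # the signal itself: one header, one bit

# urllib normalizes header names to capitalized form, hence "Dnt"
print(req.get_header("Dnt"))  # → 1
```

Whether any server honors that one bit is exactly the question the regulatory debate leaves open.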

This week saw the release of another major government privacy report, this one from the Department of Commerce.  The report called for expanded oversight and a new Privacy Policy Office within the Commerce Department. [Good summary of the report is here, and make sure to see Braden’s post about it here.]  The Commerce Dept. green paper follows a report from the Federal Trade Commission (FTC) just a few weeks ago. The FTC report also endorsed a new regulatory framework, including a so-called “Do Not Track” mechanism to allow easier consumer opt-outs of online data collection and advertising.

Commenting on the gradual move toward a mandatory opt-in world for online advertising / data collection, Corey Kronengold of Digiday makes an argument that Berin Szoka and I have tried to develop here in the past.  Namely, if government regulation “breaks” the implicit online quid pro quo currently governing online sites and services — i.e., that you get lots of free stuff in exchange for tolerating ads and data collection — then something must give.  In all likelihood, that means paywalls will go up and prices will increase from zero to something higher.  In his essay, “Taking Issue: The Value of Privacy,” Kronengold argues:

The value chain of online publishing is increasingly complex. And most consumers don’t have any interest in understanding the mechanics of targeting, data collection and re-selling, and ad revenue sharing. If continued access to free web content is what consumers are after, this has to change. Not participating in the value exchange is not an option. Yet we continue to struggle to explain. We need to do a better job of explaining the options and the consequences of those choices. When we can more clearly explain the benefits of allowing third party data to be bought and sold, users, and our government, are much more likely to allow us to continue to do so.

Continue reading →

Earlier today the Commerce Department’s Internet Policy Task Force issued its expected privacy report. Commerce waded into shark-filled privacy waters and produced a report that overall is thoughtful, comprehensive and has lots of meat for strengthening the nation’s privacy framework. Of course, we have our quibbles too. On first read, here’s what I like and what concerns me:

Like:

  • “Dynamic policies”. The report appropriately proposes what it calls “dynamic policies.” We agree that technology and information flows are constantly changing, so a privacy policy regulatory framework should not be static, nor should it be proscriptive.
  • Privacy Policy Office. Because it would be located within Commerce, the office would be a vital advocate for online companies doing business overseas. It could help outreach with European regulators and coordinate certification procedures to enable cross-border data flows.
  • Transparency through purpose specification and use limitation (NOT collection limitation and data minimization). The report proposes consumer assurances principles that would require data collectors to specify all the reasons for collecting personal information and then specify limits on the use of that information. This is a flexible approach compared to proscriptive regulations limiting data collection and requiring data minimization.
  • Encourage Global Interoperability. In our comments, NetChoice advocated strongly for international privacy reciprocation, and where appropriate, harmonization.
  • ECPA Review. We like how the report calls for a review of the Electronic Communications Privacy Act (ECPA). The law is outdated and doesn’t do a good job of clarifying the roles of online companies when responding to law enforcement requests.

Concerns: Continue reading →

The Sixth Circuit ruled on Tuesday that criminal investigators must obtain a warrant to seize user data from cloud providers, voiding parts of the notorious Stored Communications Act.  The SCA allowed investigators to demand that providers turn over user data under certain circumstances (e.g., data stored more than 180 days) without obtaining a warrant supported by probable cause.

I have a very long piece analyzing the decision, published on CNET this evening.  See “Search Warrants and Online Data:  Getting Real.” (I also wrote extensively about digital search and seizure in “The Laws of Disruption.”)  The opinion is from the erudite and highly-readable Judge Danny Boggs.    The case is notable if for no other reason than its detailed and lurid description of the business model for Enzyte, a supplement that promises to, well, you know what it promises to do…. Continue reading →

At today’s FCC “Generation Mobile” forum — chock-full of online safety experts, company reps, Jane Lynch of the TV show Glee, and even Chairman Genachowski himself — it was the kids who made the show worthwhile. On a panel about generation mobile, here are a few of the statements we heard from high school kids:

  1. “Don’t just take the phone away.”
  2. “When parents snoop too much, it’s a privacy invasion.”
  3. “We’ll listen more if you present us with concrete evidence for behavioral restrictions.”

These are the kinds of arguments tech policy advocates make, only we would have said them in our unique brand of policy speak:

  1. Don’t regulate the technology, regulate bad behavior.
  2. Privacy is important and governments/companies must respect the privacy interests of their citizens/customers.
  3. Policymakers should collect sufficient data and analysis before introducing new legislation.

Policy geek speak aside, here are some interesting facts we heard about teen use of mobile technology: Continue reading →

Deep in this Washington Post story on dynamic pricing—prices that change based on what online retailers know or guess about individual customers—come these lines:

[A]s much as retailers try to foil bargain shoppers, consumers do hold the upper hand online. Dynamic pricing is easy to counteract. Search multiple sites – including ones that collect prices from across the Internet as well as the sites themselves. Run searches on more than one browser, including one on which you have erased cookies. Leave items in a shopping cart for a few days to gin up discount offers.

That makes the rest of the story, and wafting consumer protection concerns with dynamic pricing, a little humdrum. Indeed, it belies the headline: “How Online Retailers Stay a Step Ahead of Comparison Shoppers.”
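The counter-measures quoted above can be made concrete with a toy simulation. Everything here is invented for illustration: a hypothetical retailer prices against two cookies, so a cookie-free session recovers the list price and an abandoned cart draws a discount.

```python
# Toy model of dynamic pricing keyed to cookies; the retailer's rules
# are entirely hypothetical, chosen only to mirror the quoted advice.
def quoted_price(cookies: dict) -> float:
    base = 100.0
    if cookies.get("abandoned_cart"):
        return base - 10.0  # lure back the shopper who walked away
    if cookies.get("returning"):
        return base + 10.0  # a recognized shopper sees a higher quote
    return base

print(quoted_price({"returning": "1"}))       # → 110.0
print(quoted_price({}))                       # → 100.0  (cookies erased)
print(quoted_price({"abandoned_cart": "1"}))  # → 90.0   (cart left for days)
```

The second call is the Post’s advice in miniature: a browser with no cookie history simply never triggers the tracked-shopper branch.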

Even better advice—certainly the simplest—is: Don’t buy what you can’t afford. That is serious consumer protection.

While I harbor plenty of doubts about the wisdom or practicability of Do Not Track legislation, I have to cop to sharing one element of Nick Carr’s unease with the type of argument we often see Adam and Berin make with respect to behavioral tracking here.  As a practical matter, someone who is reasonably informed about the scope of online monitoring and moderately technically savvy already has an array of tools available to “opt out” of tracking. I keep my browsers updated, reject third party cookies and empty the jar between sessions, block Flash by default, and only allow Javascript from explicitly whitelisted sites. This isn’t a perfect solution, to be sure, but it’s a decent barrier against most of the common tracking mechanisms, and one that interferes minimally with the browsing experience. (Even I am not quite zealous enough to keep Tor on for routine browsing.) Many of us point to these tools as evidence that consumers have the ability to protect their privacy, and argue that education and promotion of PETs is a better way of dealing with online privacy threats. Sometimes this is coupled with the claim that failure to adopt these tools more widely just goes to show that, whatever they might tell pollsters about an abstract desire for privacy, in practice most people don’t actually care enough about it to undergo even mild inconvenience.
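The settings just described can be captured in a Firefox user.js file along these lines. This is only a sketch: the preference names and values below are drawn from Firefox’s configuration system as I understand it, and per-site JavaScript whitelisting actually requires an extension such as NoScript rather than a bare preference.

```javascript
// Sketch of the browser hardening described above, as Firefox user.js
// preferences. Values are illustrative, not a vetted lockdown.
user_pref("network.cookie.cookieBehavior", 1);  // reject third-party cookies
user_pref("network.cookie.lifetimePolicy", 2);  // keep cookies only until the browser closes
user_pref("plugins.click_to_play", true);       // plugins (e.g., Flash) load only on click
```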

That sort of argument seems to me to be very strongly in tension with the claim that some kind of streamlined or legally enforceable “Do Not Track” option will spell doom for free online content as users begin to opt-out en masse. (Presumably, of course, The New York Times can just have a landing page that says “subscribe or enable tracking to view the full article.”) If you think an effective opt-out mechanism, included by default in the major browsers, would prompt such massive defection that behavioral advertising would be significantly undermined as a revenue model, logically you have to believe that there are very large numbers of people who would opt out if it were reasonably simple to do so, but aren’t quite geeky enough to go hunting down browser plug-ins and navigating cookie settings. And this, as I say, makes me a bit uneasy. Because the hidden premise here, it seems, must be that behavioral advertising is so important to supplying this public good of free content that we had better be really glad that the average, casual Web user doesn’t understand how pervasive tracking is or how to enable more private browsing, because if they could do this easily, so many people would make that choice that it would kill the revenue model.  So while, of course, Adam never says anything like “invisible tradeoffs are better than visible ones,” I don’t understand how the argument is supposed to go through without the tacit assumption that if individuals have a sufficiently frictionless mechanism for making the tradeoff themselves, too many people will get it “wrong,” making the relative “invisibility” of tracking (and the complexity of blocking it in all its forms) a kind of lucky feature.
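The parenthetical landing-page idea is easy to picture in code. In this hedged sketch, with an invented handler name and invented messages, a publisher reads the browser’s opt-out signal and gates content on it, which is just the quid pro quo re-negotiated explicitly.

```python
# Hypothetical content gate keyed to a Do Not Track request header.
# The function name and response strings are invented for illustration.
def respond(headers: dict) -> str:
    if headers.get("DNT") == "1":
        return "Subscribe, or enable tracking to view the full article."
    return "<full article, supported by behaviorally targeted ads>"

print(respond({"DNT": "1"}))  # opted-out visitor hits the paywall
print(respond({}))            # tracked visitor reads for free
```

If mass opt-out really would gut the revenue model, gates like this are the obvious publisher response, which is precisely why the “doom for free content” argument sits uneasily beside the “tools already exist” argument.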

There are, of course, plenty of other reasons for favoring self-help technological solutions to regulatory ones. But as between these two types of arguments, I think you probably do have to pick one or the other.