The Problem with Paul Ohm’s Suggestion to Regulate Inferences to Protect Privacy

March 28, 2011

Here’s an interesting SmartPlanet interview with Paul Ohm, associate professor of law at the University of Colorado Law School, in which he discusses his concerns about “reidentification” as it relates to privacy issues.  “Reidentification” and “de-anonymization” fears have been set forth by Ohm and other computer scientists and privacy theorists, who suggest that because a slim possibility exists that some individuals in certain data sets could be re-identified even after their data is anonymized, that fear should trump all other considerations and public policy should be adjusted accordingly (specifically, in the direction of stricter privacy regulation and tighter information controls).

I won’t spend any time here on that particular issue since I am still waiting for Ohm and other “reidentification” theorists to address the cogent critique offered up by Jane Yakowitz in an important new study that I discussed here last week. Once they do, I might have more to say on that point. Instead, I just wanted to make some brief comments on one particular passage from the Ohm interview in which he outlines a bold new standard for privacy regulation:

We have 100 years of regulating privacy by focusing on the information a particular person has. But real privacy harm will come not from the information they have but the inferences they can draw from the data they have. No law I have ever seen regulates inferences. So maybe in the future we may regulate inferences in a really different way; it seems strange to say you can have all this data but you can’t take this next step. But I think that’s what the law has to do.

This is a rather astonishing new legal standard, and there are two simple reasons why, as Ohm suggests, “no law… regulates inferences” and why, in my opinion, no law should.  First, every day in countless ways, other people (including many businesses) make inferences about us to satisfy a variety of needs. Consider a few examples based on my own personal experiences:

  • Example 1: Your local butcher may deduce from past purchases which types of meat you like and suggest new choices or cuts that are to your liking. This happened just this past weekend for me when a butcher at my local Balducci’s grocer recommended I try a terrific cut of steak after years of watching what else I bought there. And because I am such a regular shopper at Balducci’s, I also get special coupons and discounts offered to me all the time based on inferences drawn from past purchases. (I have a very similar experience at a local beer and wine store).
  • Example 2: Your mobile phone provider may draw inferences from past usage patterns to offer you a more sensible text or data plan. This happened to me last year when Verizon Wireless cold-called me and set up a much better plan for me.
  • Example 3: Your car or home insurance agent may use data about your past behavior to adjust premiums or offer better plans. When I was a teenage punk, my family’s insurance company properly inferred that I was a bad risk to them (and to others on the road!) because of multiple speeding tickets. I paid higher premiums as a result all the way through my 20s. But as I aged and got fewer tickets, they inferred I was a better bet and gave me a lower premium.

I could go on and cite a litany of other examples, but you get the point: Personal information and inferences based upon that information are a natural part of any society and economy.  As my local butcher example illustrates, inferences have always been part of our economy, but such inferences drive an increasing portion of our Information Age economy these days. Thus, practically speaking, it would be quite difficult to devise a clear legal standard that specified what sort of inferences were allowed versus those that would be regarded as verboten.

But there’s a far more profound problem with Ohm’s suggestion that “in the future we may regulate inferences in a really different way.”  Simply stated, at least here in the United States, it could conflict rather radically with our strong First Amendment traditions. Eugene Volokh of UCLA law school summarized this general problem for much of privacy law in his seminal 2000 law review article, “Freedom of Speech, Information Privacy, and the Troubling Implications of a Right to Stop People from Speaking About You.” As he observed there:

The difficulty is that the right to information privacy — the right to control other people’s communication of personally identifiable information about you — is a right to have the government stop people from speaking about you. And the First Amendment (which is already our basic code of “fair information practices”) generally bars the government from “control[ling the communication] of information,” either by direct regulation or through the authorization of private lawsuits.

Now, I understand that there are times when the First Amendment will need to give way to accommodate certain privacy concerns, although my list would be a short one (mostly extremely sensitive forms of personal information). But the problem with Ohm’s paradigm of regulating inferences is that it puts privacy regulation on an epic collision course with the First Amendment, since it would require the repression of large amounts of inferential data. This could have a profound chilling effect on speech, journalism, transparency efforts, and much more.  For consumers it could mean fewer choices and higher prices.  As noted above, using data to draw inferences is what facilitates a huge array of offers and special deals in our capitalist economy.  Those offers and deals would dry up if those making them were suddenly denied the right to collect information about us and draw inferences from it.

I can imagine one response to my argument that goes something like this: “Well, we’ll just have to separate ‘good’ inferences from ‘bad’ inferences and regulate accordingly!”  Again, I suppose we can find a couple of buckets where special considerations — even rules — are needed, such as some health and financial information categories.  But we already have laws on the books to deal with those issues. What Ohm is suggesting is that something more is needed, and by making inferences the linchpin of his new paradigm, he raises serious questions about just how far the law can and should go to bottle up information and restrict human observation.
