Digital Sensors, Darknets, Hyper-Transparency & the Future of Privacy

January 28, 2011

A headline in the USA Today earlier this week screamed, “Hello, Big Brother: Digital Sensors Are Watching Us.” It opens with an all-too-typical techno-panic tone, replete with tales of impending doom:

Odds are you will be monitored today — many times over. Surveillance cameras at airports, subways, banks and other public venues are not the only devices tracking you. Inexpensive, ever-watchful digital sensors are now ubiquitous.

They are in laptop webcams, video-game motion sensors, smartphone cameras, utility meters, passports and employee ID cards. Step out your front door and you could be captured in a high-resolution photograph taken from the air or street by Google or Microsoft, as they update their respective mapping services. Drive down a city thoroughfare, cross a toll bridge, or park at certain shopping malls and your license plate will be recorded and time-stamped.

Several developments have converged to push the monitoring of human activity far beyond what George Orwell imagined. Low-cost digital cameras, motion sensors and biometric readers are proliferating just as the cost of storing digital data is decreasing. The result: the explosion of sensor data collection and storage.

Oh my God! Dust off your copies of the Unabomber Manifesto and run for your shack in the hills!

No, wait, don’t. Let’s instead step back, take a deep breath and think about this. As the article goes on to note, there will certainly be many benefits to our increasing “sensor society.” Advertising and retail activity will become more personalized and offer consumers more customized goods and services. I wrote about that here at greater length in my essay on “Smart-Sign Technology: Retail Marketing Gets Sophisticated, But Will Regulation Kill It First?” More importantly, ubiquitous digital sensors and data collection/storage will also increase our knowledge of the world around us exponentially and do wonders for scientific, environmental, and medical research.

But that won’t soothe those who fear the loss of their privacy and the rise of a surveillance society in which our every move is watched or tracked. So, let’s talk about what those of you who feel that way want to do about it.

The Challenge of Information Control

The USA Today article quotes some people I know fairly well and have great respect for (Lee Tien, Chris Wolf, & Ryan Calo) raising various concerns but not really offering any specific recommendations. I suspect that it’s only a matter of time before we hear calls for regulation — even bans — of digital sensor/surveillance technologies. On the other hand, things might unfold the way they did when RFID chips/tags came on the scene. There was a lot of hysteria then, but things died down and — unless I missed something — no major restrictions on their use were instituted, and RFID is in widespread use today.

But the “creepiness” or intrusiveness factor gets ratcheted up a bit with next-gen digital sensor technology, especially because these tools have become highly decentralized and dirt cheap. Practically every teenager is walking around with a powerful digital “sensor” or surveillance technology in their pocket today. It’s called their phone. Except they rarely use it to make calls. They do, however, use it to record audio and video of themselves and the world around them and instantaneously share it with the planet. They also use geolocation technologies to pinpoint their own movements, and those of others, in real time.

Meanwhile, new translation tools and biometric technologies are becoming widely available to average folk. Those of you who have played with Google Goggles on your smartphone know what I am talking about. Incredibly cool stuff, but you can see where it is heading. In a couple of years, we’ll have biometric buttons on our shirts feeding live streams of our daily movements and interactions into social networking sites and databases. We’ll use them to record our days and play them back later, or perhaps just to instantly scan and recognize faces and places in case we can’t remember them using our noggins. As a result, mountains of intimate data will be created, collected, collated, and cataloged on a daily basis.

And there isn’t much we can do to stop this. As I noted in my essays on “Privacy as an Information Control Regime: The Challenges Ahead” and “The IP & Porn Wars Give Way to the Privacy & Cybersecurity Wars,” today’s information control efforts are greatly complicated by problems associated with (1) convergence, (2) scale, (3) volume, and (4) unprecedented individual empowerment / user-generation of content. Thus, for better or worse, the information genies — porn, hate speech, spam, state secrets, pirated content, personal information, etc. — are out of their bottles and getting them back in will be an enormous challenge.

Darknet & the Decline of Practical Obscurity

In the context of personal privacy, the net result of all of this — to quote Jim Harper’s excellent 2006 book Identity Crisis — is the “decline of practical obscurity.” “As practical obscurity declines,” Harper notes, “it becomes more likely that large quantities of data centered on identified individuals will be collected and more likely that it will be shared and used. With large collections of data highly correlated to precise identities, the consequences of being identified are changing.” (p. 163) Harper rightly notes that this may not be all bad. Again, there will be many benefits associated with this. But many others — especially those who are privacy fundamentalists and would have privacy trump most other values — won’t want to hear about possible benefits or trade-offs. It’s pretty much all bad from their perspective.

So, let’s get back to what we want to do about all this. Is “creepiness” enough of a harm to call in the code cops to undo progress? If so, can we roll back the clock or put this particular technology back in the bottle? I suppose that, with enough effort, we could. But I can’t help but think about all the “darknet”-related critiques I’ve heard over the past decade about the futility of efforts to protect intellectual property or use DRM to secure IP against widespread dissemination. As I noted in my essay on “Two Paradoxes of Privacy Regulation,” many of these arguments have been set forth by the same people who now tell us they want to try to bottle up information in this context by “property-tizing” personal information.

But if the darknet critique holds for flows of copyrighted information, why would it not also hold for personal information? Perhaps there is less incentive to push personal information out across the planet as aggressively as intellectual property, but that doesn’t mean there is no incentive to do so. Many people will do it voluntarily each and every day when they put the most intimate details (and pictures/videos) of their lives online. And, as the darknet critique informs us, once the information is out, it’s pretty much game over.

This is one reason why I’ve been mildly entertained by what some privacy regulatory advocates have said recently about “Do Not Track” regulation being able to stop or slow the technological arms race in the privacy arena.  “The header-based Do Not Track system appeals because it calls for an armistice in the arms race of online tracking,” says Rainey Reitman of EFF.  And the always provocative regulatory agitator Chris Soghoian argues that “opt out mechanisms… [could] finally free us from this cycle of arms races, in which advertising networks innovate around the latest browser privacy control.”  These guys should know better. There is no way in hell that Do Not Track would slow the technological “arms race” in this arena. If anything, a Do Not Track mandate will speed up that arms race and potentially just shift attention toward the development of Deep Packet Inspection (DPI) technologies or other, more invasive, forms of tracking.
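
For readers who haven’t followed the mechanics, the “header-based” system Reitman describes is strikingly simple: the browser attaches a one-line “DNT: 1” header to each web request, and everything else is left to the goodwill of the receiving server. Here is a minimal sketch in Python of what the receiving end might look like. Only the DNT request header itself is part of the actual proposal; the handler, its names, and the cookie are hypothetical illustration, and they make plain why the header alone cannot end the arms race:

```python
# Minimal sketch of a server handling the proposed "DNT" opt-out header.
# Only the "DNT: 1" request header reflects the actual proposal; this
# handler and its names are hypothetical, for illustration only.
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrackingAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The browser signals the user's preference in a single header.
        opted_out = self.headers.get("DNT") == "1"

        self.send_response(200)
        if not opted_out:
            # Set a hypothetical tracking cookie for users who have
            # not opted out.
            self.send_header("Set-Cookie", "track_id=abc123")
        # Note the asymmetry: nothing in the protocol *prevents* tracking.
        # A server can read this header and simply ignore it, or track by
        # other means entirely (fingerprinting, upstream DPI), which is
        # the crux of the "arms race" point above.
        body = b"opted out\n" if opted_out else b"tracked\n"
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackingAwareHandler).serve_forever()
```

Compliance, in other words, is entirely voluntary on the server’s side, which is exactly why a mandate would redirect, rather than halt, the innovation these advocates describe.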

I suppose they would argue that we’ll turn our attention to those technological developments as they happen, but that would make my point. There will be technological and marketplace responses to efforts to freeze current market structures, norms, and technologies in place. Again, for better or worse, progress happens.  It’s just that privacy advocates aren’t particularly fond of the consequences of technological progress in this regard and want to put a stop to it.  But they will fail.

Hyper-Transparency

At this point, some savvy readers might suspect I have fallen under the spell of David Brin and the vision he set forth in his 1997 book, The Transparent Society. There’s some truth to that, at least as it pertains to the empirical side of his argument. For those who forget his provocative thesis, Brin argued that:

While new surveillance and data technologies pose vexing challenges, we may be wise to pause and recall what worked for us so far. Reciprocal accountability — a widely shared power to shine light, even on the mighty — is the unsung marvel of our age, empowering even eccentrics and minorities to enforce their own freedom. Shall we scrap civilization’s best tool — light — in favor of a fad of secrecy?
Across the political spectrum, a “Strong Privacy” movement claims that liberty and personal privacy are best defended by anonymity and encryption, or else by ornate laws restricting what groups or individuals may be allowed to know. This approach may seem appealing, but there are no historical examples of it ever having worked.  Strong Privacy bears a severe burden of proof when they claim that a world of secrets will protect freedom… even privacy… better than what has worked for us so far — general openness.
Indeed, it’s a burden of proof that can sometimes be met! Certainly there are circumstances when/where secrecy is the only recourse… in concealing the location of shelters for battered wives, for instance, or in fiercely defending psychiatric records. These examples stand at one end of a sliding scale whose principal measure is the amount of harm that a piece of information might plausibly do, if released in an unfair manner. At the other end of the scale, new technologies seem to require changes in our definition of privacy. What salad dressing you use may be as widely known as what color sweater you wear on the street… and just as harmlessly boring.
The important thing to remember is that anyone who claims a right to keep something secret is also claiming a right to deny knowledge to others. There is an inherent conflict! Some kind of criterion must be used to adjudicate this tradeoff and most sensible people seem to agree that this criterion should be real or plausible harm… not simply whether or not somebody likes to keep personal data secret.

As a normative matter, I’m not entirely in league with Brin, but I do think he makes a very powerful case for transparency and openness trumping privacy and secrecy. (And isn’t it a delicious irony of information policy debates that the same crowd that is typically hammering on policymakers about the need for greater “openness” and transparency in all other matters suddenly wants to do the opposite when our personal information is brought into the discussion?!)

But where I am entirely in agreement with Brin is with his empirical or practical case for understanding and, to some extent, accepting the world around us.  I wouldn’t necessarily label it the snarky “privacy is dead, just get over it,” but I would think it fair to call this philosophy “privacy is changing, and we need to learn how to live with it.”

Thinking about Concrete Harms & Targeted Solutions to Them

To be clear, I’m not against all forms of “privacy” law or regulation.  When it comes to government surveillance, I think we need more limitations on the State and the ability of public officials to access certain types of information, or act upon it. The key point here is that the solution to State surveillance concerns should not be bans on the technology. We instead need to shackle State actors and tightly delimit their power over our lives—such as by tightening up the Electronic Communications Privacy Act, as the Digital Due Process Coalition proposes, and by creating new protections for locational data, as Sen. Wyden has recently proposed.  And we should do so because the State possesses uniquely coercive powers over our lives and our property.

For privately aggregated data, it’s more complicated. I continue to think we can live with most forms of private data collection and aggregation since there are great benefits for society.  Most of the time, companies are just trying to sell us a more relevant product.  It’s hard for me to see the harm in that.  But there will be certain categories of personal information that will eventually need to be carved out of the mix.  I think health and financial information are the two primary categories in this case. It doesn’t mean we should take extreme steps to limit all data flows associated with them, but we will likely need to take some steps.  And most countries, including the U.S., already have targeted laws dealing with those two categories of personal information.  In this sense, I look at privacy regulation in much the same way I look at censorship.  The general default should be that openness and information sharing are permissible. But in some extreme cases — think child pornography — most of us can agree that the harm is quite tangible and significant enough to warrant repression of that information / content.

These are challenging issues and this is fertile ground for further academic investigation.  I think that we are only beginning to explore and understand the mechanics of information control regimes. As we continue that exploration, especially as we look to significantly broaden regulation of personal information flows, here are some questions for scholars to consider and debate:

  • In the context of privacy and personal information, how far should law go to roll back digital progress or try to put the genie back in the bottle?
  • Does the “darknet” theory have ramifications for the privacy debate?
  • Can or should we have similar information control regimes for privacy, content control, defamation, intellectual property, cybersecurity, etc., or should each problem be treated/regulated differently?
  • If, however, we adopt differing regulatory regimes for different classes of information, won’t the most restrictive regime become a model for the others?
  • Finally, instead of attempting to stifle all information flows or block new technologies that facilitate information sharing, are we better off — as Brin suggests — channeling our energy into increasing transparency across the board, so that those who hold information about us are forced to reveal what they have or know? Of course, that will lead some to suggest — as many privacy advocates do today — that we should be given more control over the uses of that information once it is in the wild. Again, what I am assuming here is that this is increasingly an exercise in futility.
