When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed

April 29, 2011

When it comes to information control, everybody has a pet issue and everyone will be disappointed when law can’t resolve it. I was reminded of this truism while reading a provocative blog post yesterday by computer scientist Ben Adida entitled “(Your) Information Wants to be Free.” Adida’s essay touches upon an issue I have been writing about here a lot lately: the complexity of information control — especially in the context of individual privacy. [See my essays on “Privacy as an Information Control Regime: The Challenges Ahead,” “And so the IP & Porn Wars Give Way to the Privacy & Cybersecurity Wars,” and this recent FTC filing.]

In his essay, Adida observes that:

In 1984, Stewart Brand famously said that information wants to be free. John Perry Barlow reiterated it in the early 90s, and added “Information Replicates into the Cracks of Possibility.” When this idea was applied to online music sharing, it was cool in a “fight the man!” kind of way. Unfortunately, information replication doesn’t discriminate: your personal data, credit cards and medical problems alike, also want to be free. Keeping it secret is really, really hard.

Quite right. We’ve been debating the complexities of information control in the Internet policy arena for the last 20 years and I think we can all now safely conclude that information control is hugely challenging regardless of the sort of information in question. As I’ll note below, that doesn’t mean control is impossible, but the relative difficulty of slowing or stopping information flows of all varieties has increased exponentially in recent years.

But Adida’s more interesting point concerns the selective morality at play in debates over information control. That is, people generally expect or favor information freedom in some arenas, but then get pretty upset when they can’t crack down on information flows elsewhere. Indeed, some people get downright religious about the whole “information-wants-to-be-free” thing in one breath and then, without missing a beat, talk like information totalitarians in the next.

I discussed this in relation to the privacy debates in my essays referenced above. I’ve noted how some “cyber-progressives” (or whatever you prefer to call tech thinkers and advocates on the Left) have been practically giddy with delight at the sight of copyright owners scrambling to find methods to protect their content from widespread distribution over distributed digital networks. Just about every information control effort attempted in the copyright arena — whether DRM, paywalls, or even suing end-users — has failed to provide the degree of protection desired. The “darknet” critique remains fairly cogent. That doesn’t mean I’m excusing copyright piracy as a normative matter; it’s just to say that the cyber-progressives were certainly on to something as an empirical matter when they detailed the deficiencies of various IP control efforts.

But here’s the interesting question: Why shouldn’t we believe that the exact same critique applies to privacy and personal information flows? Again, that’s not to say that privacy isn’t important as a normative matter, and data security certainly is. It’s just to say that, as an empirical matter, information control in this context is going to be every bit as difficult as it is in the copyright context. Yet, the same crowd of cyber-progressives who were all for information freedom in the copyright context are now hoping to crack down on personal information flows in the name of protecting privacy.

And it is not going to work.

Nor will it work well for those who are looking to crack down on the flow of bits that contain porn or violent content.

Nor will it work well for those “cyber-conservatives” who are looking to crack down on the flow of bits that contain state secrets or online gambling.

Nor will it work well for those who want to curb what they regard as “harassing” speech, “hate speech,” or defamatory comments.

And so on. And so on.

I will be accused of being too much of a technological determinist, but I think there’s a lot of evidence suggesting that at least “soft determinism” is the order of the day. In a brilliant and highly provocative new paper, “Hasta La Vista Privacy, or How Technology Terminated Privacy,” Konstantinos K. Stylianou of the University of Pennsylvania Law School discusses varieties of technological determinism as they pertain to information control and notes:

In-between the two extremes (technology as the defining factor of change and technology as a mere tangent of change) and in a multitude of combinations falls the so called soft determinism; that is, variations of the combined effect of technology on one hand and human choices and actions on the other. (p. 46)

Stylianou acknowledges, however, that “The scope of soft determinism is unfortunately so broad that it loses all normative value. Encapsulated in the axiom ‘human beings do make their world, but they are also made by it,’ soft determinism is reduced to the self-evident.” Nonetheless, he argues, “a compromise can be reached by mixing soft and hard determinism in a blend that reserves for technology the predominant role only in limited cases,” since he believes “there are indeed technologies so disruptive by their very nature they cause a certain change regardless of other factors.” (p. 46) He concludes the paper by noting:

it seems reasonable to infer that the thrust behind technological progress is so powerful that it is almost impossible for traditional legislation to catch up. While designing flexible rules may be of help, it also appears that technology has already advanced to the degree that it is able to bypass or manipulate legislation. As a result, the cat-and-mouse chase game between the law and technology will probably always tip in favor of technology. It may thus be a wise choice for the law to stop underestimating the dynamics of technology, and instead adapt to embrace it. (p. 54)

That pretty much sums up where I’m at on most information control issues and explains why I sound so fatalistic at times, even if I do believe that law can have an impact at the margins. Such “soft determinism” will be hard for some to swallow. Many will simply refuse to accept it, especially when they hear statements like those Stylianou makes in the context of privacy, such as: “the advancement of digital technology is ineluctably bound to have a destructive impact on privacy” (p. 47), or “technology has made it indeed so easy to collect personal data that in many cases they have lost their individual value, and instead function merely as statistical or ancillary data” (p. 51), or “What technological determinism teaches us so far is that people will always react negatively to more intrusive technology, but in the end they will probably succumb.” (p. 54)

One might cynically view this simply as a more eloquent restatement of Scott McNealy’s famous quip: “privacy is dead, get over it.” While that’s a bit of an overstatement, it’s nonetheless true that privacy is under enormous strain because of modern digital developments (summarized in Exhibit 3 below). But, again, everything is under enormous strain. Perhaps, therefore, we need a reformulation of McNealy’s quip: “Information control is dead, get over it.”

Anyway, going forward, we need a framework to think about information control efforts. I’ve been working with my Mercatus Center colleague Jerry Brito to develop just that in a forthcoming paper (current running title: “The Trouble with Information Control”). To begin, we simplify matters by dividing information control efforts into four big buckets, as shown in Exhibit 1 below. (Note: With Jerry’s help, I have reworked these categories since first outlining them here.)

Exhibit 1: RATIONALES FOR INFORMATION CONTROL

(1) Censorship / Speech Control

  • politically unpopular speech
  • porn
  • violent content
  • hate speech
  • cyberbullying

(2) Privacy

  • defamation
  • reputation

(3) Copyright & Trademark Protection

(4) Security

  • state secrets
  • national security
  • law enforcement
  • cybersecurity
  • online gambling

Next, we can consider various legal responses to these objects of information control, as detailed in Exhibit 2:

Exhibit 2: LEGAL & REGULATORY RESPONSES / APPROACHES TO INFORMATION CONTROL

  • Intermediary deputization / secondary liability
  • Individual prosecutions / fines
  • Controls on speech / expression
  • Controls on monetary flows
  • Other regulation
  • Taxation / fines
  • Agency enforcement / adjudication

Finally, we need to consider how efforts to control information today are greatly complicated by problems or phenomena that are unique to the Internet or the Information Age, as outlined in Exhibit 3:

Exhibit 3: INFORMATION CONTROL CONSIDERATIONS / COMPLICATIONS

  • Media & Technological Convergence
  • Decentralized, Distributed Networking
  • Unprecedented Scale of Networked Communications
  • Explosion of the Overall Volume of Information
  • Unprecedented Individual Information Sharing Through User-Generation of Content and Self-Revelation of Data

In this upcoming paper, Jerry and I will provide case studies based on many of the issues outlined in Exhibit 1 and show how the information control methods shown in Exhibit 2 typically fail to slow or restrict information flows because of the factors outlined in Exhibit 3. Assuming we can prove our thesis — that soft determinism is the order of the day and information control efforts of all varieties are increasingly difficult (and often completely futile) — I fully expect that we will make just about everybody unhappy with us!

However, I want to conclude by noting that just because I am somewhat fatalistic or deterministic about the likely failure of most information control proposals or mechanisms does not mean I am willing to throw my hands in the air and say there’s absolutely nothing that can be done to address some of the concerns listed in Exhibit 1. In my work on online child safety issues, I developed what I call a “3-E Solution” to those concerns. In my paper with Jerry, I’m hoping to use this as a framework for dealing with all information control concerns going forward:

  1. Education: Get more information out about the issue / concern.
  2. Empowerment: Give consumers more and better tools to act on that information.
  3. (Selective) Enforcement: Have law step in at the margins when it’s appropriate and cost-efficient, and only after education and empowerment fail.

Of course, how much stress we place on each component of this toolbox will depend on the issue. I’ve already suggested that the last “E” of enforcement will be largely ineffective, especially when outright prohibition of particular information flows is the objective. But enforcement could be more effective in other contexts, such as holding companies accountable for the promises they make to consumers, policing industry self-regulatory schemes, or demanding more transparency and disclosure. Those enforcement practices have helped in the child safety and privacy contexts. In still other contexts, the harm in question may be so severe (child pornography, for example) that we would bypass the education and empowerment steps altogether and go to much greater lengths to make the enforcement option work. Even then, we should keep our expectations in check and avoid a rush to extreme solutions.

There’s much more to be explored here. Stay tuned.
