Much of my recent research and writing has focused on the contrast between “permissionless innovation” (the notion that innovation should generally be allowed by default) and its antithesis, the “precautionary principle” (the idea that new innovations should be discouraged or even disallowed until their developers can prove that they won’t cause any harm). I have discussed this dichotomy in three recent law review articles, a couple of major agency filings, and several blog posts. Those essays are listed at the end of this post.
In this essay, I want to discuss a recent speech by Federal Trade Commission (FTC) Chairwoman Edith Ramirez and show how precautionary principle thinking is increasingly creeping into modern information technology policy discussions, prompted by the various privacy concerns surrounding “big data” and the “Internet of Things,” among other information innovations and digital developments.
First, let me recap the core argument I make in my recent articles and filings. It can be summarized as follows:
- If public policy is guided at every turn by the precautionary mindset, then innovation becomes impossible because of fear of the unknown. Hypothetical worst-case scenarios trump all other considerations under this mentality. Social learning and economic opportunities become far less likely under such a policy regime. In practical terms, it means fewer services, lower-quality goods, higher prices, diminished economic growth, and a decline in the overall standard of living. (See this essay and this one.)
- Wisdom is born of experience, including experiences involving risk and the possibility of mistakes and accidents. Patience and a general openness to permissionless innovation represent the wise disposition toward new technologies not only because they provide breathing space for future entrepreneurialism, but also because they provide an opportunity to observe both the evolution of societal attitudes toward new technologies and how citizens adapt to them. (See this essay.)
- Not every wise ethical principle, social norm, or industry best practice automatically makes for wise public policy. If we hope to preserve a free and open society, we simply cannot convert every ethical directive or norm — no matter how sensible — into a legal directive, or else the scope of human freedom and innovation will shrink precipitously. (See this essay.)
- The best solutions to complex social problems are organic and “bottom-up” in nature. User education and empowerment, informal household media rules, social pressure, societal norms, and targeted enforcement of existing legal norms (especially through the common law) are almost always superior to “top-down,” command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I” nature. (See this essay.)
- For the preceding reasons, when it comes to information technology policy, “permissionless innovation” should, as a general rule, trump “precautionary principle” thinking. To the maximum extent possible, the default position toward new forms of technological innovation should be “innovation allowed,” or what Paul Ohm has appropriately labeled the “anti-Precautionary Principle.” (See this essay.)
Again, we are today witnessing a fairly vivid clash of these conflicting worldviews in many current debates about online commercial data collection, “big data,” and the so-called “Internet of Things.” For example, FTC Chairwoman Ramirez recently delivered a speech at the annual Technology Policy Institute Aspen Forum on the topic of “The Privacy Challenges of Big Data: A View from the Lifeguard’s Chair.” Ramirez made several provocative assertions and demands in the speech, but here’s the one “commandment” I really want to focus on. Claiming that “One risk is that the lure of ‘big data’ leads to the indiscriminate collection of personal information,” Chairwoman Ramirez went on to argue:
The indiscriminate collection of data violates the First Commandment of data hygiene: Thou shall not collect and hold onto personal information unnecessary to an identified purpose. Keeping data on the off chance that it might prove useful is not consistent with privacy best practices. And remember, not all data is created equally. Just as there is low quality iron ore and coal, there is low quality, unreliable data. And old data is of little value. (emphasis added)
And later in the speech she goes on to argue that “Information that is not collected in the first place can’t be misused,” and then she suggests a parade of horribles that will befall us if such data collection is allowed at all.
The Problem with “Mother, May I” Regulation
So here we have a rather succinct articulation of precautionary principle thinking as applied to modern data collection practices. Chairwoman Ramirez is essentially claiming that — because there are various privacy risks associated with data collection and aggregation — we must consider preemptive and potentially highly restrictive approaches to the initial collection and aggregation of data.
The problem with that logic should be fairly obvious, and it was perfectly identified by the great political scientist Aaron Wildavsky in his seminal 1988 book Searching for Safety. Wildavsky warned of the dangers of the “trial without error” mentality — otherwise known as the precautionary principle approach — and he contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. As Wildavsky argued:
The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. (emphasis added)
Let’s apply that lesson to Chairwoman Ramirez’s speech. When she argues that “Information that is not collected in the first place can’t be misused,” there is absolutely no doubt that her statement is true. But it is equally true that information that is never collected is information that might have been used to provide us with the next “killer app” or some great gadget or digital service that we cannot currently contemplate but that some innovative entrepreneur might be looking to develop.
Likewise, claiming that “old data is of little value” and issuing the commandment that “Thou shall not collect and hold onto personal information unnecessary to an identified purpose” reveals a rather stunning arrogance about the possibility of serendipitous data discovery: Either Chairwoman Ramirez doesn’t think it can happen or she doesn’t care if it does. But the reality is that the cornucopia of innovative information options and opportunities we have at our disposal today was driven in large part by data collection, including personal data collection. And often those innovations were not part of some initial grand design; instead, they came about through the discovery of new and interesting things that could be done with data after the fact.
For example, many of the information services and digital technologies that we enjoy and take for granted today — language translation tools, mobile traffic services, digital mapping technologies, spam and fraud detection tools, instant spell-checkers, and so on — came about not because of some initial grand design but rather through innovative thinking after the fact about how preexisting data sets might be used in interesting new ways. As Viktor Mayer-Schönberger and Kenneth Cukier point out in their recent book, Big Data: A Revolution That Will Transform How We Live, Work, and Think, “data’s value needs to be considered in terms of all the possible ways it can be employed in the future, not simply how it is used in the present.” “In the big-data age,” they note, “data is like a magical diamond mine that keeps on giving long after its principal value has been tapped.” (pp. 103-4)
In any event, if the new policy in the United States is to follow Chairwoman Ramirez’s pronouncement that “Keeping data on the off chance that it might prove useful is not consistent with privacy best practices,” then much of the information economy as we know it today will need to be shut down. At a minimum, entrepreneurs will need to start hiring a lot more lobbyists who can sit in Washington and petition the FTC or other policymakers for permission to innovate whenever they have an interesting new idea for offering us a new service based on data that was not initially collected for that purpose. Again, it’s “Mother, May I” regulation, and we had better get used to a lot more of it if we go down the path that Chairwoman Ramirez is charting.
Alternative, Less-Restrictive Remedies
But here’s the biggest flaw in Chairwoman Ramirez’s reasoning: There is no need for preemptive, prophylactic, precautionary approaches when less-restrictive and potentially equally effective remedies exist.
Ramirez’s speech was subtitled “A View from the Lifeguard’s Chair,” implying that her role is to oversee online practices and ensure consumers are safe. That’s a noble intention, but based on some of her remarks, one is left wondering whether her true intention is simply to drain the information oceans instead.
But there are better ways to deal with dangerous digital waters. In my work on both online child safety and commercial data privacy, I have argued that the best answer to these complex social problems is a mix of technological controls, social pressure, informal rules and norms, and, most importantly, education and digital literacy efforts. And government can play an important role by helping to educate and empower citizens so they are prepared for our new media environment.
That was the central finding of a blue-ribbon panel of experts convened in 2002 by the National Research Council of the National Academy of Sciences to study how best to protect children in the new, interactive, “always-on” multimedia world. Under the leadership of former U.S. Attorney General Richard Thornburgh, the group produced an amazing report entitled Youth, Pornography, and the Internet, which outlined a sweeping array of methods and technological controls for dealing with potentially objectionable media content or online dangers. Ultimately, however, the experts used a compelling metaphor to explain why education was the most important tool on which parents and policymakers should rely:
Technology—in the form of fences around pools, pool alarms, and locks—can help protect children from drowning in swimming pools. However, teaching a child to swim—and when to avoid pools—is a far safer approach than relying on locks, fences, and alarms to prevent him or her from drowning. Does this mean that parents should not buy fences, alarms, or locks? Of course not—because they do provide some benefit. But parents cannot rely exclusively on those devices to keep their children safe from drowning, and most parents recognize that a child who knows how to swim is less likely to be harmed than one who does not. Furthermore, teaching a child to swim and to exercise good judgment about bodies of water to avoid has applicability and relevance far beyond swimming pools—as any parent who takes a child to the beach can testify. (p. 224)
Regrettably, as I noted in my old book on online safety, we often fail to teach our children how to swim in the new media waters. Indeed, to extend the metaphor, it is as if we are simply throwing kids into the deep water and waiting to see what happens. The same is true for digital privacy. We sometimes expect both kids and adults to figure out how to swim in these information currents without any training first.
To rectify this situation, a serious media literacy and digital citizenship agenda is needed in America. Media literacy programs teach children and adults alike to think critically about media, and to better analyze and understand the messages that media providers are communicating. I went on to argue in my old book that government should push media literacy efforts at every level of the education process. And those efforts should be accompanied by widespread public awareness campaigns to better inform parents about the parental control tools, rating systems, online safety tips, and other media control methods at their disposal.
In the three recent law review articles listed below, I extended this model to privacy and showed how this bottom-up, education- and empowerment-based approach is equally applicable to the debates we are having today about commercial data collection. And I also stressed the vital importance of personal responsibility and corporate responsibility as part of these digital citizenship efforts.
Conclusion
In sum, the key question going forward is this: Are we going to teach people how to swim, or are we going to drain the information oceans based on the fear that people could be harmed by the very existence of some deep data waters?
Chairwoman Ramirez concluded her speech by noting that, “Like the lifeguard at the beach, though, the FTC will remain vigilant to ensure that while innovation pushes forward, consumer privacy is not engulfed by that wave.” As well-intentioned as that sounds, the thrust of her remarks suggests that fear of the water is prompting this particular lifeguard to consider drastic precautionary steps to save us from the potential dangers of those waves. Needless to say, such a mentality and the corresponding policy framework would have profound ramifications.
Indeed, let’s be clear about what’s at stake here. This is not about “protecting corporate profits” or Silicon Valley companies. This is about ensuring that individuals as both citizens and consumers continue to enjoy the myriad benefits that accompany an open, innovative information ecosystem. We can find better ways to address the dangers of deep data waters without draining the info-oceans. Let’s teach people how to swim in those waters and how to be responsible data stewards so that we can all continue to enjoy the many benefits of our modern data-driven economy.
_________________________
Additional Reading:
Law Review Articles:
- “The Pursuit of Privacy in a World Where Information Control is Failing” – Harvard Journal of Law & Public Policy
- “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle” – Minnesota Journal of Law, Science & Technology
- “A Framework for Benefit-Cost Analysis in Digital Privacy Debates” – George Mason University Law Review
Blog posts:
- “Who Really Believes in ‘Permissionless Innovation’?” – March 4, 2013
- “What Does It Mean to ‘Have a Conversation’ about a New Technology?”
- “Planning for Hypothetical Horribles in Tech Policy Debates” – August 6, 2013
- “On the Line between Technology Ethics vs. Technology Policy” – August 1, 2013
- “Can We Adapt to the Internet of Things?” – June 19, 2013 (IAPP Privacy Perspectives blog)
Testimony / Filings:
- Senate Testimony on Privacy, Data Collection & Do Not Track – April 24, 2013
- Comments of the Mercatus Center to the FTC in Privacy & Security Implications of the Internet of Things
- Mercatus filing to FAA on commercial domestic drones