Some Initial Thoughts on the FTC Internet of Things Report

January 28, 2015

Yesterday, the Federal Trade Commission (FTC) released its long-awaited report on “The Internet of Things: Privacy and Security in a Connected World.” The 55-page report is the result of a lengthy staff exploration of the issue, which kicked off with an FTC workshop held on November 19, 2013.

I’m still digesting all the details in the report, but I thought I’d offer a few quick thoughts on some of the major findings and recommendations from it. As I’ve noted here before, I’ve made the Internet of Things my top priority over the past year and have penned several essays about it here, as well as in a big new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology shortly. (Also, here’s a compendium of most of what I’ve done on the issue thus far.)

I’ll begin with a few general thoughts on the FTC’s report and its overall approach to the Internet of Things and then discuss a few specific issues that I believe deserve attention.

Big Picture, Part 1: Should Best Practices Be Voluntary or Mandatory?

Generally speaking, the FTC’s report contains a variety of “best practice” recommendations to get Internet of Things innovators to take steps to ensure greater privacy and security “by design” in their products. Most of those recommended best practices are sensible as general guidelines for innovators, but the really sticky question here continues to be this: When, if ever, should “best practices” become binding regulatory requirements?

The FTC does a bit of a dance when answering that question. Consider how, in the executive summary of the report, the Commission answers the question regarding the need for additional privacy and security regulation: “Commission staff agrees with those commenters who stated that there is great potential for innovation in this area, and that IoT-specific legislation at this stage would be premature.” But, just a few lines later, the agency (1) “reiterates the Commission’s previous recommendation for Congress to enact strong, flexible, and technology-neutral federal legislation to strengthen its existing data security enforcement tools and to provide notification to consumers when there is a security breach;” and (2) “recommends that Congress enact broad-based (as opposed to IoT-specific) privacy legislation.”

Here and elsewhere, the agency repeatedly stresses that it is not seeking IoT-specific regulation, merely “broad-based” digital privacy and security legislation. The problem is that once you understand what the IoT is all about, you come to realize that this largely represents a distinction without a difference. The Internet of Things is simply the extension of the Net into everything we own or come into contact with. Thus, the idea that the agency is not seeking IoT-specific rules sounds terrific until you realize that it is actually seeking something far more sweeping: greater regulation of all online / digital interactions. And because “the Internet” and “the Internet of Things” will eventually be considered synonymous (if they are not already), the notion that the agency is not proposing technology-specific regulation is really quite silly.

Now, it remains unclear whether there exists any appetite on Capitol Hill for “comprehensive” legislation of any variety – although perhaps we’ll learn more about that possibility when the Senate Commerce Committee hosts a hearing on these issues on February 11. But at least thus far, “comprehensive” or “baseline” digital privacy and security bills have been non-starters.

And that’s for good reason in my opinion: Such regulatory proposals could take us down the path that Europe charted in the late 1990s with onerous “data directives” and suffocating regulatory mandates for the IT / computing sector. The results of this experiment have been unambiguous, as I documented in congressional testimony in 2013. I noted there how America’s Internet sector came to be the envy of the world while it was hard to name any major Internet company from Europe. Whereas America embraced “permissionless innovation” and let creative minds develop one of the greatest success stories in modern history, the Europeans adopted a “Mother, May I” regulatory approach for the digital economy. America’s more flexible, light-touch regulatory regime leaves more room for competition and innovation compared to Europe’s top-down regime. Digital innovation suffered over there while it blossomed here.

That’s why we need to be careful about adopting the sort of “broad-based” regulatory regime that the FTC recommends in this and previous reports.

Big Picture, Part 2: Does the FTC Really Need More Authority?

Something else is going on in this report that has also been happening in all the FTC’s recent activity on digital privacy and security matters: The agency has been busy laying the groundwork for its own expansion.

In this latest report, for example, the FTC argues that

Although the Commission currently has authority to take action against some IoT-related practices, it cannot mandate certain basic privacy protections… The Commission has continued to recommend that Congress enact strong, flexible, and technology-neutral legislation to strengthen the Commission’s existing data security enforcement tools and require companies to notify consumers when there is a security breach.

In other words, this agency wants more authority. And we are talking about sweeping authority here that would transcend its already sweeping authority to police “unfair and deceptive practices” under Section 5 of the FTC Act. Let’s be clear: It would be hard to craft a law that grants an agency more comprehensive and open-ended consumer protection authority than Section 5. The meaning of those terms — “unfairness” and “deception” — has always been a contentious matter, and at times the agency has abused its discretion by exploiting that ambiguity.

Nonetheless, Sec. 5 remains a powerful enforcement tool for the agency, and one that has been wielded aggressively in recent years to police digital economy giants and small operators alike. Generally speaking, I’m alright with most Sec. 5 enforcement, especially since that sort of retrospective policing of unfair and deceptive practices is far less likely to disrupt permissionless innovation in the digital economy. That’s because it does not subject digital innovators to the sort of “Mother, May I” regulatory system that European entrepreneurs face. But an expansion of the FTC’s authority via more “comprehensive, baseline” privacy and security regulatory policies threatens to convert America’s more sensible bottom-up and responsive regulatory system into the sort of innovation-killing regime we see on the other side of the Atlantic.

Here’s the other thing we can’t forget when it comes to the question of what additional authority to give the FTC over privacy and security matters: The FTC is not the end of the enforcement story in America. Other enforcement mechanisms exist, including privacy torts, class action litigation, property and contract law, state enforcement agencies, and other targeted privacy statutes. I’ve summarized all these additional enforcement mechanisms in my recent law review article referenced above. (See section VI of the paper.)

FIPPs, Part 1: Notice & Choice vs. Use-Based Restrictions

Next, let’s drill down a bit and examine some of the specific privacy and security best practices that the agency discusses in its new IoT report.

The FTC report highlights how the IoT creates serious tensions for many traditional Fair Information Practice Principles (FIPPs). The FIPPs generally include: (1) notice, (2) choice, (3) purpose specification, (4) use limitation, and (5) data minimization. But the report is mostly focused on notice and choice as well as data minimization.

When it comes to notice and choice, the agency wants to keep hope alive that the model will still be applicable in an IoT world. I’m sympathetic to this effort because it is quite sensible for all digital innovators to do their best to provide consumers with adequate notice about data collection practices and then give them sensible choices about it. Yet I also agree with the agency that “offering notice and choice is challenging in the IoT because of the ubiquity of data collection and the practical obstacles to providing information without a user interface.”

The agency offers a nuanced discussion of how context matters in providing notice and choice for the IoT, but one can’t help but think that even the Commission realizes, to some extent, that the game is over. The increasing miniaturization of IoT devices and the ease with which they suck up data mean that traditional approaches to notice and choice just aren’t going to work all that well going forward. It is almost impossible to envision how a rigid application of traditional notice and choice procedures would work in practice for the IoT.

Relatedly, as I wrote here last week, the Future of Privacy Forum (FPF) recently released a new white paper entitled “A Practical Privacy Paradigm for Wearables,” which notes how FIPPs “are a valuable set of high-level guidelines for promoting privacy, [but] given the nature of the technologies involved, traditional implementations of the FIPPs may not always be practical as the Internet of Things matures.” That’s particularly true of the notice and choice FIPPs.

But the FTC isn’t quite ready to throw in the towel and make the complete move toward “use-based restrictions,” as many academics have. (Note: I have a lengthy discussion of this migration toward use-based restrictions in section IV.D of my law review article.) Use-based restrictions would focus on specific uses of data that are particularly sensitive and for which there is widespread agreement that they should be limited or disallowed altogether. But use-based restrictions are, ironically, controversial from the perspectives of both industry and privacy advocates (albeit for different reasons, obviously).
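To make the use-based model a bit more concrete, here is a minimal sketch of its underlying logic: data is gathered at collection time, and policy is checked at the moment of use. Every name in it (the use categories, the data categories, the policy table) is my own hypothetical illustration; nothing here comes from the FTC report:

```python
# Hypothetical sketch only -- illustrates the logic of a "use-based" model:
# restrictions attach to particular *uses* of data rather than to collection.

from enum import Enum, auto

class Use(Enum):
    PRODUCT_FUNCTION = auto()   # e.g., a thermostat adjusting temperature
    ANALYTICS = auto()          # aggregate product improvement
    AD_TARGETING = auto()       # behavioral advertising
    ELIGIBILITY = auto()        # credit, insurance, or employment decisions

# Illustrative policy table: sensitive categories are walled off from the
# uses most observers agree should be limited or disallowed.
PROHIBITED = {
    "health_reading": {Use.AD_TARGETING, Use.ELIGIBILITY},
    "location_trace": {Use.ELIGIBILITY},
    "usage_telemetry": set(),   # low sensitivity: no use restricted
}

def use_permitted(data_category: str, proposed_use: Use) -> bool:
    """Return True if the proposed use of this data category is allowed."""
    return proposed_use not in PROHIBITED.get(data_category, set())

assert use_permitted("health_reading", Use.PRODUCT_FUNCTION)
assert not use_permitted("health_reading", Use.AD_TARGETING)
```

The hard part, of course, is the policy table itself: industry and privacy advocates would fill it in very differently, which is precisely why the model is controversial on both sides.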

The FTC doesn’t really know where to go next with use-based restrictions. On the one hand, the agency says it “has incorporated certain elements of the use-based model into its approach” to enforcement in the past. On the other hand, the agency says it has concerns “about adopting a pure use-based model for the Internet of Things,” since it may not go far enough in addressing the growth of more widespread data collection, especially of more sensitive information.

In sum, the agency appears to be keeping the door open on this front and hoping that a best-of-all-worlds solution miraculously emerges that extends both notice and choice and use-based limitations as the IoT expands. But the agency’s new report doesn’t give us any sort of blueprint for how that might work, and that’s likely for good reason: it probably won’t work all that well in practice, and there will be serious costs in terms of lost innovation if the agency tries to force unworkable solutions on this rapidly evolving marketplace.

FIPPs, Part 2: Data Minimization

The biggest policy fight that is likely to come out of this report involves the agency’s push for data minimization. The report recommends that, to minimize the risks associated with excessive data collection:

companies should examine their data practices and business needs and develop policies and practices that impose reasonable limits on the collection and retention of consumer data. However, recognizing the need to balance future, beneficial uses of data with privacy protection, staff’s recommendation on data minimization is a flexible one that gives companies many options. They can decide not to collect data at all; collect only the fields of data necessary to the product or service being offered; collect data that is less sensitive; or deidentify the data they collect. If a company determines that none of these options will fulfill its business goals, it can seek consumers’ consent for collecting additional, unexpected categories of data…
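To see what the staff’s flexible menu of options might look like in practice, here is a minimal sketch of two of them: collecting only the fields of data necessary to the service, and deidentifying what is retained. The event fields, the salting scheme, and the truncated hash are my own hypothetical illustrations, not anything the report prescribes:

```python
# Hypothetical sketch only -- walks through two of the staff's options:
# collect only necessary fields, and deidentify what is kept.

import hashlib

RAW_EVENT = {
    "device_id": "A1B2C3",
    "owner_email": "alice@example.com",   # not needed to deliver the service
    "room_temp_f": 68.5,
    "timestamp": "2015-01-28T09:00:00Z",
}

# Option: collect only the fields necessary to the product or service.
NEEDED_FIELDS = {"device_id", "room_temp_f", "timestamp"}

# Option: deidentify direct identifiers before retention.
SALT = b"rotate-me-regularly"  # illustrative; real schemes need key management

def minimize(event: dict) -> dict:
    """Drop unneeded fields, then replace the device ID with a salted hash."""
    kept = {k: v for k, v in event.items() if k in NEEDED_FIELDS}
    digest = hashlib.sha256(SALT + kept["device_id"].encode()).hexdigest()
    kept["device_id"] = digest[:12]
    return kept

print(minimize(RAW_EVENT))
# e.g. {'device_id': '...', 'room_temp_f': 68.5, 'timestamp': '2015-01-28T09:00:00Z'}
```

Even in this toy form, the trade-off the commissioners flag below is visible: whatever fields are dropped or hashed away are unavailable for later, unanticipated beneficial uses.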

The staff’s recommendation is unsurprising given that, in previous major speeches on the issue, FTC Chairwoman Edith Ramirez has argued that “information that is not collected in the first place can’t be misused,” and that:

The indiscriminate collection of data violates the First Commandment of data hygiene: Thou shall not collect and hold onto personal information unnecessary to an identified purpose. Keeping data on the off chance that it might prove useful is not consistent with privacy best practices. And remember, not all data is created equally. Just as there is low quality iron ore and coal, there is low quality, unreliable data. And old data is of little value.

In my forthcoming law review article, I discuss the problem with such reasoning at length, noting:

if Chairwoman Ramirez’s approach to a preemptive data use “commandment” were enshrined into a law that said, “Thou shall not collect and hold onto personal information unnecessary to an identified purpose,” such a precautionary limitation would certainly satisfy her desire to avoid hypothetical worst-case outcomes because, as she noted, “information that is not collected in the first place can’t be misused,” but it is equally true that information that is never collected may never lead to serendipitous data discoveries or new products and services that could offer consumers concrete benefits. “The socially beneficial uses of data made possible by data analytics are often not immediately evident to data subjects at the time of data collection,” notes Ken Wasch, president of the Software & Information Industry Association. If academics and lawmakers succeed in imposing such precautionary rules on the development of IoT and wearable technologies, many important innovations may never see the light of day.

FTC Commissioner Josh Wright issued a dissenting statement to the report that lambasted the staff for not conducting more robust cost-benefit analysis of the new proposed restrictions, and specifically cited how problematic the agency’s approach to data minimization was. “[S]taff merely acknowledges it would potentially curtail innovative uses of data. . . [w]ithout providing any sense of the magnitude of the costs to consumers of foregoing this innovation or of the benefits to consumers of data minimization,” he says. Similarly, in her separate statement, FTC Commissioner Maureen K. Ohlhausen worried about the report’s overly precautionary approach on data minimization when noting that, “without examining costs or benefits, [the staff report] encourages companies to delete valuable data — primarily to avoid hypothetical future harms. Even though the report recognizes the need for flexibility for companies weighing whether and what data to retain, the recommendation remains overly prescriptive,” she concludes.

Regardless, the battle lines have been drawn by the FTC staff report as the agency has made it clear that it will be stepping up its efforts to get IoT innovators to significantly slow or scale back their data collection efforts. It will be very interesting to see how the agency enforces that vision going forward and how it impacts innovation in this space. All I know is that the agency has not conducted a serious evaluation here of the trade-offs associated with such restrictions. I penned another law review article last year offering “A Framework for Benefit-Cost Analysis in Digital Privacy Debates” that they could use to begin that process if they wanted to get serious about it.

The Problem with the “Regulation Builds Trust” Argument

One of the interesting things about this and previous FTC reports on privacy and security matters is how often the agency premises the case for expanded regulation on “building trust.” The argument goes something like this (as found on page 51 of the new IoT report): “Staff believes such legislation will help build trust in new technologies that rely on consumer data, such as the IoT. Consumers are more likely to buy connected devices if they feel that their information is adequately protected.”

This is one of those commonly heard claims that sound so straightforward and intuitive that few dare question them. But there are problems with the logic of the “we-need-regulation-to-build-trust-and-boost-adoption” arguments we often hear in debates over digital privacy.

First, the agency bases its argument mostly on polling data. “Surveys also show that consumers are more likely to trust companies that provide them with transparency and choices,” the report says. Well, of course surveys say that! It’s only logical that consumers will say this, just as they will always say they value privacy and security more generally when asked. You might as well ask people if they love their mothers!

But what consumers claim to care about and what they actually do in the real world are often two very different things. In the real world, people balance privacy and security alongside many other values, including choice, convenience, cost, and more. This leads to the so-called “privacy paradox,” or the problem of many people saying one thing and doing quite another when it comes to privacy matters. Put simply, people take some risks — including some privacy and security risks — in order to reap other rewards or benefits. (See this essay for more on the problem with most privacy polls.)

Second, online activity and the Internet of Things are both growing like gangbusters despite the privacy and security concerns that the FTC raises. Virtually every metric I’ve looked at that tracks IoT activity shows astonishing growth and product adoption, and projections by all the major consultancies that have studied this consistently predict continued rapid growth of IoT activity. Now, how can this be the case if, as the FTC claims, we’ll only see the IoT really take off after we get more regulation aimed at bolstering consumer trust? Of course, the agency might argue that the IoT would grow at an even faster clip than it is right now, but there is no way to prove that one way or the other. In any event, the agency cannot possibly claim that the IoT isn’t already growing at a very healthy clip — indeed, a lot of the hand-wringing the staff engages in throughout the report is premised precisely on the fact that the IoT is exploding faster than our ability to keep up with it! In reality, it seems far more likely that cost and complexity are the bigger impediments to faster IoT adoption, just as cost and complexity have always been the factors weighing most heavily on the adoption of other digital technologies.

Third, let’s say that the FTC is correct – and it is – when it says that a certain amount of trust is needed in terms of IoT privacy and security before consumers are willing to use more of these devices and services in their everyday lives. Does the agency imagine that IoT innovators don’t know that? Are markets and consumers completely irrational? The FTC says on page 44 of the report that, “If a company decides that a particular data use is beneficial and consumers disagree with that decision, this may erode consumer trust.” Well, if such a mismatch does exist, then the assumption should be that consumers can and will push back, or seek out new and better options. And other companies should be able to sense the market opportunity and craft more privacy-centric offerings for those consumers who demand them, winning their trust and business in the process.

Finally, and perhaps most obviously, the problem with the argument that increased regulation will help IoT adoption is that it ignores how the regulations put in place to achieve greater “trust” might become so onerous or costly in practice that there won’t be as many innovations for us to adopt to begin with! Again, regulation — even very well-intentioned regulation — has costs and trade-offs.

In any event, if the agency is going to premise the case for expanded privacy regulation on this notion, it is going to have to do far more than simply assert it.

Once Again, No Appreciation of the Potential for Societal Adaptation

Let’s briefly shift to a subject that isn’t discussed in the FTC’s new IoT report at all.

Regular readers may get tired of me making this point, but I feel it is worth stressing again: Major reports and statements by public policymakers about rapidly evolving emerging technologies are always initially prone to stress panic over patience. Rarely are public officials willing to step back, take a deep breath, and consider how a resilient citizenry might adapt to new technologies as people gradually assimilate new tools into their lives.

That is really sad, when you think about it, since humans have again and again proven capable of responding to technological change in creative ways by adopting new personal and social norms. I won’t belabor the point because I’ve already written volumes on this issue elsewhere. I tried to condense all my work into a single essay entitled, “Muddling Through: How We Learn to Cope with Technological Change.” Here’s the key takeaway:

humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Again, you almost never hear regulators or lawmakers discuss this process of individual and social adaptation, even though they must know there is something to it. One explanation is that every generation has its own techno-boogeymen and loses faith in the ability of humanity to adapt to them.

To believe that we humans are resilient, adaptable creatures should not be read as indifference to the significant privacy and security challenges associated with any of the new technologies in our lives today, including IoT technologies. Overly exuberant techno-optimists are often too quick to adopt a “Just-Get-Over-It!” attitude in response to the privacy and security concerns raised by others. But it is equally unforgivable for those who worry about those same concerns to utterly ignore the reality of human adaptation to new technological realities.

Why are Educational Approaches Merely an Afterthought?

One final thing that troubled me about the FTC report was the way consumer and business education is mostly an afterthought. This is one of the most important roles that the FTC can and should play in terms of explaining potential privacy and security vulnerabilities to the general public and product developers alike.

Alas, the agency devotes so much ink to the more legalistic questions about how to address these issues that all we end up with in the report is this one paragraph on consumer and business education:

Consumers should understand how to get more information about the privacy of their IoT devices, how to secure their home networks that connect to IoT devices, and how to use any available privacy settings. Businesses, and in particular small businesses, would benefit from additional information about how to reasonably secure IoT devices. The Commission staff will develop new consumer and business education materials in this area.

I applaud that language, and I very much hope that the agency is serious about plowing more effort and resources into developing new consumer and business education materials in this area. But I’m a bit shocked that the FTC report didn’t even bother mentioning the excellent material already available on the “On Guard Online” website it helped create with a dozen other federal agencies. Worse yet, the agency failed to highlight the many other privacy education and “digital citizenship” efforts that are underway today to help on this front. I discuss those efforts in more detail in the closing section of my recent law review article.

I hope that the agency spends a little more time working on the development of new consumer and business education materials in this area instead of trying to figure out how to craft a quasi-regulatory regime for the Internet of Things. As I noted last year in this Maine Law Review article, that would be a far more productive use of the agency’s expertise and resources. I argued there that “policymakers can draw important lessons from the debate over how best to protect children from objectionable online content” and apply them to debates about digital privacy. Specifically, after a decade of searching for legalistic solutions to online safety concerns — and convening a half-dozen blue-ribbon task forces to study the issue — we finally saw a rough consensus emerge that no single “silver-bullet” technological solution or legal quick-fix would work and that, ultimately, education and empowerment represented the better use of our time and resources. What was true for child safety is equally true for privacy and security in the Internet of Things.

It’s a shame the FTC staff squandered the opportunity it had with this new report to highlight all the good that could be done by getting more serious about focusing first on those alternative, bottom-up, less costly, and less controversial solutions to these challenging problems. One day we’ll all wake up and realize that we spent a lost decade debating legalistic solutions that were either technically unworkable or politically impossible. Just imagine if all the smart people who are spending their time and energy on those approaches right now were instead busy devising and pushing educational and empowerment-based solutions!

One day we’ll get there. Sadly, if the FTC report is any indication, that day is still a ways off.
