Privacy Solutions

Recently, a group of Republican senators announced plans to introduce the COVID-19 Consumer Data Protection Act of 2020 to address privacy concerns related to contact-tracing and other pandemic-related apps. The bill is likely to reinvigorate many of the ongoing debates over a potential federal data privacy framework.

Even before its official introduction, the bill has faced criticism from some groups for failing to sufficiently protect consumers. But a more regulatory approach that might appear protective on the surface also has consequences. The European Union’s (EU) General Data Protection Regulation (GDPR) has made it more complex to develop compliant contact-tracing apps and to run charitable responses that might require personal information. Ideally, data privacy policy addressing specific COVID-19 concerns should provide enough certainty to enable innovative responses while preserving civil liberties. Policymakers should approach this area in a way that lets consumers choose the options that best fit their own privacy preferences, rather than dictate a one-size-fits-all set of privacy standards.

A Quick Review of the Current Landscape of the Data Privacy Policy Debate

Unlike the EU, the United States has taken an approach that creates privacy regulation only for specific types of data. These frameworks address the areas that consumers would likely consider most sensitive and expect increased protection for, such as financial information, health information, and children’s information. In general, this approach has allowed new and innovative uses of data to flourish.

Following various scandals and data breaches, and the expansive regulatory requirements of the EU’s GDPR, policymakers, advocates, consumers, and tech companies have begun to question whether the United States should follow Europe’s lead, create a different federal data protection framework, or maintain the status quo. In the absence of federal action, states such as California have passed their own data privacy laws. The California Consumer Privacy Act (CCPA) became effective in January (you may remember a flurry of emails notifying you of privacy policy changes) and is set to become enforceable July 1. Without a federal framework, a growing collection of state laws could take the United States from an innovation-enabling, hands-off approach to a disruptive patchwork, creating confusion for both consumers and innovators. A patchwork means that some beneficial products might not be available in all states because of differing requirements, or that the most restrictive parts of a state’s law might become the de facto national rule. To avoid this scenario, a federal framework could provide certainty to innovators creating beneficial uses of data such as contact-tracing apps (and to the consumers who use them) while also clarifying avenues for redress and any checks necessary to prevent harm.

Questions of Enforcement in the Data Privacy Debate

One key roadblock to achieving a federal privacy framework is the question of how such rules should be enforced. Some of the early criticism of the potential COVID-19 data privacy bill has centered on its anticipated lack of additional enforcement.

Often the choices for data privacy enforcement are portrayed as a false dichotomy between the status quo and an aggressive private right of action, with neither side willing to give way. In reality, as I discuss in a new primer, there is a wide range of options for potential enforcement. Policymakers should build on the advantages of the current flexible approach, which has allowed American innovation to flourish. This is also a key opportunity to improve certainty for both innovators and consumers when it comes to new uses of data. More precautionary and regulatory approaches could increase costs and discourage innovation by burdening innovative products with the need for pre-approval. Ideally, a policy framework should preserve consumers’ and innovators’ ability to make a wide range of privacy choices while still providing redress in cases of fraudulent claims or other wrongful action.

There are tradeoffs in all approaches. Current Federal Trade Commission (FTC) enforcement has led to concerns about the use of consent decrees and the need for clarity. A new agency to govern data privacy could massively expand the administrative state. State attorneys general might interpret and enforce federal privacy law differently if not given clear guidance by the FTC or Congress. A private right of action could not only deter potentially harmful innovation but also prevent consumers from receiving beneficial products, as companies pull back over litigation risk. I discuss each of these options and their tradeoffs in more detail in the primer mentioned earlier.

Policymakers should look to the success of the current approach and improve it by modifying and strengthening enforcement, rather than pursue options that could lead to some of the more pronounced consequences of intervention.

Conclusion

As we are seeing play out during the current crisis, all privacy regulation inevitably comes with tradeoffs. We should be cautious of policies that presume that privacy should always be the preferred value and instead look to address the areas of harm while allowing a wide range of preferences. When it comes to questions of enforcement and other areas of privacy legislation, policymakers should look to preserve the benefits of the American approach that has given rise to a great deal of innovation that could not have been predicted or dictated.

California’s recently enacted digital privacy legislation, the “California Consumer Privacy Act,” may be getting a sequel in the form of an initiative called the “California Privacy Rights and Enforcement Act of 2020.” While the fallout of the CCPA has yet to be seen (the Act does not go into effect until next year, and the regulations governing its application have yet to be finalized), CPREA promises to double down on its approach by creating yet more largely superfluous – and hugely expensive – digital “rights.”

How did we get here? Well, the CCPA, the original, was the brainchild of a wealthy real estate investor named Alastair Mactaggart who, inspired by a cocktail party conversation, used California’s initiative process as a cudgel to get the full attention of the legislature in Sacramento. The body was given an ultimatum: negotiate and pass privacy legislation, or Mactaggart would place his creation on the ballot.

Last week, I had the honor of being a panelist at the Information Technology and Innovation Foundation’s event on the future of privacy regulation. The debate question was simple enough: Should the US copy the EU’s new privacy law?

When we started planning the event, California’s Consumer Privacy Act (CCPA) wasn’t a done deal. But now that it has passed, with a 2020 deadline for implementation, the terms of the privacy conversation have changed. Next year, 2019, Congress will have the opportunity to pass a law that could supersede the CCPA, and some are looking to the EU’s General Data Protection Regulation (GDPR) for guidance. Here are some reasons not to take that path.

A growing number of voices are raising concerns about privacy rights and data security in the wake of news of data breaches and potential influence. The European Union (EU) recently adopted the heavily restrictive General Data Protection Regulation (GDPR), which favors individual privacy over innovation or the right to speak. While there has been some discussion of potential federal legislation related to data privacy, none of these attempts has truly gained traction beyond existing special protections for vulnerable users (like children) or specific information (like that of healthcare and finances). Some states, notably California, are attempting to solve this perceived data privacy problem on their own, but in doing so they often create bigger problems, passing potentially unconstitutional and poorly drafted solutions.


Privacy is an essentially contested concept. It evades clear definition, and when scholars do define it, they do so inconsistently. So what are we to do with this fractured term? Ryan Hagemann suggests a bottom-up approach. Instead of beginning from definitions, we should be building a folksonomy of privacy harms:

By recognizing those areas in which we have an interest in privacy, we can better formalize an understanding of when and how it should be prioritized in relation to other values. By differentiating the harms that can materialize when it is violated by government as opposed to private actors, we can more appropriately understand the costs and benefits in different situations.

Hagemann aims to route around definitional problems by exploring the spaces where our interests intersect with the concept of privacy: in our relations to government, to private firms, and to other people. It is a subtle but important shift in outlook that is worth taking seriously.

Adam Thierer, senior research fellow with the Technology Policy Program at the Mercatus Center at George Mason University, discusses his latest book Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Thierer discusses which types of policies promote technological discoveries as well as those that stifle the freedom to innovate. He also takes a look at new technologies — such as driverless cars, drones, big data, smartphone apps, and Google Glass — and how the American public will adapt to them.


Anupam Chander, Director of the California International Law Center and Martin Luther King, Jr. Hall Research Scholar at the UC Davis School of Law, discusses his recent paper with co-author Uyen P. Lee titled The Free Speech Foundations of Cyberlaw. Chander addresses how the First Amendment promotes innovation on the Internet; how limitations on free speech vary between the US and Europe; the role of online intermediaries in promoting and protecting the First Amendment; the Communications Decency Act; technology, piracy, and copyright protection; and the tension between privacy and free speech.


Reason.org has just posted my commentary on the five reasons why the Federal Trade Commission’s proposals to regulate the collection and use of consumer information on the Web will do more harm than good.

As I note, the digital economy runs on information. Regulations that impede the collection and processing of information will affect its efficiency. Given the overall success of the Web and the popularity of search and social media, there is every reason to believe that consumers have been able to balance their demand for content, entertainment, and information services against the privacy policies those services offer.

But there’s more to it than that. Technology simply doesn’t lend itself to top-down mandates. Notions of privacy are highly subjective, and online, an adaptive dynamic is constantly at work. Certainly, websites have sometimes pushed the boundaries of privacy. But only when the boundaries are tested do we find out where the consensus lies.

Legislative and regulatory directives preempt experimentation. Consumer needs are best addressed when best practices are allowed to bubble up through trial and error. When the economic and functional development of European Web media, which labors under the sweeping top-down European Union Privacy Directive, is contrasted with the dynamism of the relatively unregulated U.S. Web media sector, the difference is profound.

An analysis of the web advertising market by researchers at the University of Toronto found that after the Privacy Directive was passed, online advertising effectiveness decreased on average by around 65 percent in Europe relative to the rest of the world. The disparity persisted even when the researchers controlled for possible differences in ad responsiveness between Europeans and Americans. The authors conclude that these findings will have a “striking impact” on the $8 billion spent each year on digital advertising: European sites will see far less ad revenue than their counterparts outside Europe.

Other points I explore in the commentary are:

  • How free services go away and paywalls go up
  • How consumers push back when they perceive that their privacy is being violated
  • How Web advertising lives or dies by the willingness of consumers to participate
  • How greater information availability is a social good

The full commentary can be found here.


Do-Not-Track is not itself inconceivable. But it is like the word “inconceivable” in the movie The Princess Bride: I do not think it means what people think it means, either in how it is meant to work or in how likely it is to offer poor results.

Take Mike Swift’s reporting for MercuryNews.com on a study showing that online advertising companies may continue to follow visitors’ Web activity even after those visitors have opted out of tracking.

“The preliminary research has sparked renewed calls from privacy groups and Congress for a ‘Do Not Track’ law to allow people to opt out of tracking, like the Do Not Call list that limits telemarketers,” he writes.

If this is true, it means that people want a Do-Not-Track law more precisely because they have learned that it would be difficult to enforce.

That doesn’t make sense … until you look at who Swift interviewed for the article: a Member of Congress who made her name as a privacy regulation hawk and some fiercely committed advocates of regulation. These people were not on the fence before the study, needless to say. (Anne Toth of Yahoo! provides the requisite ounce of balance, but she defends her company and does not address the merits or demerits of a Do-Not-Track law.)

Do-Not-Track is not inconceivable. But the study shows that its advocates are not conceiving of the complexities and drawbacks of a regulatory approach, as compared with individually tailored blocking of unwanted tracking, something any Internet user can do right now using Tracking Protection Lists.
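For the curious, here is a rough sketch of what a Tracking Protection List looks like, using the plain-text format Internet Explorer 9 introduced for this feature (the domains below are placeholders, not real trackers):

    msFilterList
    : Expires=30
    # Block requests to these (hypothetical) tracking domains
    -d tracker.example.com
    -d ads.example.net
    # Always allow this (hypothetical) domain, even if another list blocks it
    +d cdn.example.org

A user who subscribes to such a list gets individually tailored blocking: the browser simply declines to request content from the blocked domains, with no cooperation from trackers or regulators required.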

Social widgets, such as the now-ubiquitous Facebook “Like” button and Twitter “Tweet” button, offer users a convenient way to share online content with their friends and followers. These widgets have recently come under scrutiny for their privacy implications. Yesterday, The Wall Street Journal reported that Facebook, Twitter, and Google are informed each time a user visits a webpage that contains one of their respective widgets:

Internet users tap Facebook Inc.’s “Like” and Twitter Inc.’s “Tweet” buttons to share content with friends. But these tools also let their makers collect data about the websites people are visiting. These so-called social widgets, which appear atop stories on news sites or alongside products on retail sites, notify Facebook and Twitter that a person visited those sites even when users don’t click on the buttons, according to a study done for The Wall Street Journal.

It wasn’t exactly a secret that social widgets “phone home.” However, the Journal’s story shed new light on how the firms that offer social widgets handle the data they glean regarding user browsing habits. Facebook and Google reportedly store this data for a limited period of time — two weeks and 90 days, respectively — and, importantly, the data isn’t recorded in a way that can be tied back to a user (unless, of course, the user affirmatively decides to “like” a webpage). Twitter reportedly records browsing data as well, but deletes it “quickly.”
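To see why the “phone home” happens automatically, consider a sketch of how a publisher might embed a Like button. The iframe URL follows the pattern of Facebook’s iframe-based embed; the publisher address is hypothetical:

    <!-- A page at https://news.example.com/story embeds the button -->
    <iframe
      src="https://www.facebook.com/plugins/like.php?href=https%3A%2F%2Fnews.example.com%2Fstory"
      width="120" height="20"></iframe>

Merely rendering the page causes the browser to fetch the iframe from facebook.com, sending along any Facebook cookies the visitor holds and, via the href parameter, the address of the page being read. No click on the button is required, which is exactly the behavior the Journal’s study described.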

Assuming the companies effectively anonymize the data they glean from their social widgets, privacy-conscious users have little reason to worry. I’m not aware of any evidence that social widget data has been misused or breached. However, as Pete Warden reminded us in an informative O’Reilly Radar essay posted earlier this week, anonymizing data is harder than it sounds, and supposedly “anonymous” data sets have been successfully de-anonymized on several occasions. (For more on the de-anonymization of data sets, see Arvind Narayanan and Vitaly Shmatikov’s 2008 research paper on the topic).
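To make the re-identification risk concrete, here is a minimal, purely illustrative Python sketch of the kind of linkage attack Narayanan and Shmatikov describe: an “anonymized” log is joined against public records on quasi-identifiers. Every name, field, and record below is invented:

    # Illustrative linkage attack: join an "anonymized" log with public
    # records on quasi-identifiers (ZIP code and birth year). All data is made up.

    anonymized_log = [
        {"zip": "94301", "birth_year": 1980, "urls_visited": ["site-a", "site-b"]},
        {"zip": "10001", "birth_year": 1975, "urls_visited": ["site-c"]},
    ]

    public_records = [
        {"name": "Alice Example", "zip": "94301", "birth_year": 1980},
        {"name": "Bob Example", "zip": "10001", "birth_year": 1975},
    ]

    def reidentify(log, records):
        """Attach names to log rows whose quasi-identifiers match uniquely."""
        matches = []
        for row in log:
            candidates = [p for p in records
                          if p["zip"] == row["zip"]
                          and p["birth_year"] == row["birth_year"]]
            if len(candidates) == 1:  # a unique match defeats the "anonymization"
                matches.append((candidates[0]["name"], row["urls_visited"]))
        return matches

    print(reidentify(anonymized_log, public_records))
    # [('Alice Example', ['site-a', 'site-b']), ('Bob Example', ['site-c'])]

The point is not that anyone has done this with widget data, only that “anonymous” records remain linkable whenever enough quasi-identifiers survive the scrubbing.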
