Privacy Solutions – Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

The Continuing Data Privacy Debates and the Question of Enforcement
https://techliberation.com/2020/05/07/the-continuing-data-privacy-debates-and-the-question-of-enforcement/ (Thu, 07 May 2020)

Recently, a group of Republican senators announced plans to introduce the COVID-19 Consumer Data Protection Act of 2020 to address privacy concerns related to contact-tracing and other pandemic-related apps. The bill will reinvigorate many of the ongoing debates over a potential federal data privacy framework.

Even before its official introduction, the bill has faced criticism from some groups for failing to sufficiently protect consumers. But a more regulatory approach that appears protective on the surface also has consequences. The European Union's (EU) General Data Protection Regulation (GDPR) has made it more complex to develop compliant contact-tracing apps and to run charitable responses that rely on personal information. Ideally, data privacy policy around the specific COVID-19 concerns should provide enough certainty to enable innovative responses while preserving civil liberties. Policymakers should approach this policy area in a way that enables consumers to choose the options that best fit their own privacy preferences, not dictate a one-size-fits-all set of privacy standards.

A quick review of the current landscape of the data privacy policy debate

Unlike the EU, the United States has taken an approach that only creates privacy regulation for specific types of data. Specific frameworks address those areas that consumers would likely consider the most sensitive and expect increased protection, such as financial information, health information, and children’s information. In general, this approach has allowed new and innovative uses of data to flourish.

Following various scandals and data breaches and the expansive regulatory requirements of the EU’s GDPR, policymakers, advocates, consumers, and tech companies have begun to question if the United States should follow Europe’s lead, or instead create a different federal data protection framework, or even maintain the status quo. In the absence of federal action, states such as California have passed their own data privacy laws. The California Consumer Privacy Act (CCPA) became effective in January (you may remember a flurry of emails notifying you of privacy policy changes) and is set to become enforceable July 1. The lack of a federal framework means, with various state laws, the United States could go from an innovation-enabling hands-off approach to a disruptive patchwork, creating confusion for both consumers and innovators. A patchwork means that some beneficial products might not be available in all states because of differing requirements or that the most restrictive parts of a state’s law might become the de facto rule. To avoid this scenario, a federal framework would provide certainty to innovators creating beneficial uses of data such as contact-tracing apps (and the consumers that use them) while also clarifying the redress and any necessary checks to prevent harm.

Questions of Enforcement in the Data Privacy Debate

One key roadblock to achieving a federal privacy framework is the question of how such rules should be enforced. Some of the early criticism of the potential COVID-19 data privacy bill has focused on the anticipated lack of additional enforcement.

Often the choices for data privacy enforcement are portrayed as a false dichotomy between the status quo and an aggressive private right of action, with neither side willing to give way. In reality, as I discuss in a new primer, there is a wide range of options for potential enforcement. Policymakers should build on the advantages of the current flexible approach that has allowed American innovation to flourish. This also provides a key opportunity to improve certainty for both innovators and consumers when it comes to new uses of data. More precautionary and regulatory approaches could increase costs and discourage innovation by burdening new products with the need for pre-approval. Ideally, a policy framework should preserve consumers' and innovators' ability to make a wide range of privacy choices while still providing redress in cases of fraudulent claims or other wrongful action.

There are tradeoffs in all approaches. Current Federal Trade Commission (FTC) enforcement has led to concerns about the use of consent decrees and the need for clarity. A new agency to govern data privacy could massively expand the administrative state. State attorneys general might interpret and enforce federal privacy law differently if not given clear guidance from the FTC or Congress. A private right of action could deter not only potentially harmful innovation but also keep beneficial products from reaching consumers because of litigation risks. I discuss each of these options and tradeoffs in more detail in the new primer mentioned earlier.

Policymakers should look to the success of the current approach and modify and increase enforcement to improve that approach, rather than pursue other options that could lead to some of the more pronounced consequences of intervention.

Conclusion

As we are seeing play out during the current crisis, all privacy regulation inevitably comes with tradeoffs. We should be cautious of policies that presume that privacy should always be the preferred value and instead look to address the areas of harm while allowing a wide range of preferences. When it comes to questions of enforcement and other areas of privacy legislation, policymakers should look to preserve the benefits of the American approach that has given rise to a great deal of innovation that could not have been predicted or dictated.

Explaining the California Privacy Rights and Enforcement Act of 2020
https://techliberation.com/2019/10/02/explaining-the-california-privacy-rights-and-enforcement-act-of-2020/ (Wed, 02 Oct 2019)

California's recently enacted digital privacy legislation, the "California Consumer Privacy Act," may be getting a sequel in the form of an initiative called the "California Privacy Rights and Enforcement Act of 2020." While the fallout of CCPA has yet to be seen, since the Act does not go into effect until next year and the regulations governing its application have yet to be finalized, CPREA promises to double down on its approach by creating yet more largely superfluous, and hugely expensive, digital "rights."

How did we get here? Well, CCPA, the original, was the brainchild of a wealthy real estate investor named Alastair Mactaggart who, inspired by a cocktail party conversation, used California's initiative process as a cudgel to get the full attention of the legislature in Sacramento. The body was given an ultimatum: negotiate and pass privacy legislation, or Mactaggart would place his creation on the ballot.

Instead of running the risk of complicating a 2018 midterm ballot in which Democrats were slated to make huge gains, the Democratic super-majorities in Sacramento chose to pass comprehensive privacy legislation in a matter of days, thereby utterly transforming the way digital commerce occurs in the Golden State. Unsurprisingly, the result was that California became subject to a technically unworkable mess of regulation that necessitated an entire year of subsequent legislative work to clean it up.

Now, in the wake of that saga, and in spite of a largely successful campaign in the state capital, Mactaggart has grown weary of the legislative process and crafted another initiative to expand and refine the vision of privacy he would like to impose on America's most populous state. Only this time, it appears he has no intention of working through the legislative process. This time, Mactaggart is going to be a one-man policy decider.

As released, the initiative is equal parts privacy extremism and cynical politics. Substantively, some will find elements to applaud in the CPREA, from prohibitions on behavioral advertising to reputational risk assessments (all of which deserve their own critiques), but the operational structure of the CPREA is nothing short of disastrous. Here are some of the worst bits:

  • Amendments (Section 24) – this section would effectively prevent California from changing its approach to privacy without another initiative, and may even prevent the sort of subsequent legislative clean-up that was necessary to make CCPA at all workable in the first place. A straightforward lesson in exactly what happens when such provisions pass is 1988's Proposition 103, whose similar provision has effectively prevented innovation in California's insurance market. Wonder why property insurance premiums are skyrocketing in the wake of the state's fires, and why there has been no appreciable development in the auto insurance sector? Look no further than this clause.

  • California Privacy Protection Agency (Section 23) – to enforce the Act, the CPREA creates a new government agency with the power to audit firms' approaches to security and to fine them $2,500 per unwitting violation. While pointless (why have an Attorney General anyway?), that's not entirely unusual. What is problematic is that the new agency's entire existence would be funded directly by fines instead of the general fund, creating an incentive to use broadly defined powers to search for violations to sustain its very existence. What's more, the statute of limitations in the Act is long, the right to cure is curtailed, and the agency is directed to fund consumer groups annually to "promote and protect consumer privacy." All of this represents a devil's cocktail of bad incentives for regulatory overreach.

  • Duties of Businesses that Collect Personal Information (Section 4) – new business-side duties in the CPREA will lead to compliance headaches without achieving clear benefits for consumers. For instance, the Act includes an obligation to maintain "reasonable security," a standard without definition but readily enforceable by a fine-inclined agency. Similarly troubling, the Act's definition of "personal information" likely encompasses a person's likeness, which, in concert with the Act's other requirements, means that when a Californian walks into a brick-and-mortar retailer using security cameras, the Act would require the firm to provide notice. In effect, this requirement will function as an enforcement trap. The only good to come of it will be the resulting boom in the state's sign-making industry, as notices proliferate in a manner that makes Proposition 65's utterly pointless chemical warnings appear reasonable.

Fortunately, there is time yet for the CPREA to be fought off. Californians, and industry within the state, could see to the direct electoral defeat of CPREA and/or the passage of another initiative designed to more directly remedy consumer harms. Doing so will require not only clear communication about the costs of CCPA and CPREA alike, but also a recognition that voters do want to see something, anything, done related to privacy. Give them a moderate alternative and a reason to choose it, and Mactaggart’s status as de facto state privacy administrator may come to an end.

Should the US Adopt the GDPR?
https://techliberation.com/2018/10/01/should-the-us-adopt-the-gdpr/ (Mon, 01 Oct 2018)

Last week, I had the honor of being a panelist at the Information Technology and Innovation Foundation's event on the future of privacy regulation. The debate question was simple enough: Should the US copy the EU's new privacy law?

When we started planning the event, California’s Consumer Privacy Act (CCPA) wasn’t a done deal. But now that it has passed and presents a deadline of 2020 for implementation, the terms of the privacy conversation have changed. Next year, 2019, Congress will have the opportunity to pass a law that could supersede the CCPA and some are looking to the EU’s General Data Protection Regulation (GDPR) for guidance. Here are some reasons for not taking that path.

GDPR imposes three kinds of costs on firms. First, the regulation forces firms to retool data processes to align with the new demands; this is generally a one-time fixed cost that raises costs for all information-using entities. Second, the regime adds compliance risk, causing companies to staff up to ensure compliance. Finally, the law will change the dynamics of the industry as companies adapt to the new requirements.

Right now, the retooling costs and the compliance costs go hand in hand, so it is difficult to suss out each. Still, they are substantial. A McDermott-Ponemon survey on GDPR preparedness found that almost two-thirds of companies say the regulation will "significantly change" their informational workflows. For the just over 50 percent of companies expecting to be ready for the changes, the average budget for reaching compliance tops $13 million, by this estimate. Among all the new requirements, the survey found that companies were struggling most with data-breach notification: 68 percent of companies cited the inability to comply with the notification requirement as posing the greatest risk because of the size of the levied fines.

The International Association of Privacy Professionals (IAPP) estimated the regulation will cost Fortune 500 companies around $7.8 billion to get up to speed with the law. And these won't be one-time costs, since "Global 500 companies will be hiring on average five full-time privacy employees and filling five other roles with staff members handling compliance rules." A PwC survey on the rule change found that 88 percent of companies surveyed spent more than $1 million on GDPR preparations, and 40 percent more than $10 million.

It might take some time to truly understand the impact of GDPR, but the law will surely change the dynamics of countless industries. For example, when the EU adopted the e-Privacy Directive in 2002, Goldfarb and Tucker found that advertising became far less effective. The impact seems to have reverberated throughout the ecosystem, as venture capital investment in online news, online advertising, and cloud computing dropped by between 58 and 75 percent. Information restrictions shift consumer choices. In Chile, for example, credit bureaus were forced to stop reporting defaults in 2012, which was found to reduce costs for most of the poorer defaulters but raise them for non-defaulters. Overall, the law led to a 3.5 percent decrease in lending and reduced aggregate welfare.

As the Chilean example suggests, some might benefit from a GDPR-like privacy regime. But as my co-panelist Daniel Castro pointed out, strong privacy laws haven't done much to sway public opinion. As he wrote with Alan McQuinn:

The biannual Eurobarometer survey, which interviews 100 individuals from each EU country on a variety of topics, has been tracking European trust in the Internet since 2009. Interestingly, European trust in the Internet remained flat from 2009 through 2017, despite the European Union strengthening its ePrivacy regulations in 2009 (implementation of which occurred over the subsequent few years) and significantly changing its privacy rules, such as the court decision that established the right to be forgotten in 2014. Similarly, European trust in social networks, which the Eurobarometer started measuring in 2014, has also remained flat, albeit low.

In other words, it doesn’t seem as though strong regulations have done anything to make people feel as though they are getting a better deal with Internet companies.   

One of my top concerns with the GDPR that wasn't really discussed relates to the consent requirement in the law. Now, people must affirmatively agree that data processors can use their data. As I explained at the American Action Forum:

Affirmative consent is also known as an opt-in privacy regime. Opt-in is frequently described as giving consumers more privacy protection, but opt-out regimes give an individual the same option to exit data processing without the added burdens. Indeed, most of the large companies already provide a method of opting out of certain data processing and collection. Setting the default by regulation simply biases consumer choices in a particular direction.

Overall, I think there was general agreement among the panelists that the US should not adopt the GDPR. But both Amie Stepanovich of Access Now and Justin Brookman of Consumers Union were generally in favor of implementing a couple of the fundamental elements of the GDPR, assuming they were adapted to the US legal system. Indeed, Access Now released a paper on exactly this topic.

The big question is whether the GDPR or something similar is a set of optimal rules. For countless reasons, I’m skeptical they will really improve consumer experience without imposing substantial costs. 

For more on this topic, check out:

The Problem of Patchwork Privacy
https://techliberation.com/2018/08/15/the-problem-of-patchwork-privacy/ (Wed, 15 Aug 2018)

There are a growing number of voices raising concerns about privacy rights and data security in the wake of news of data breaches and potential misuse of data. The European Union (EU) recently adopted the heavily restrictive General Data Protection Regulation (GDPR), which favors individual privacy over innovation or the right to speak. While there has been some discussion of potential federal legislation related to data privacy, none of these attempts has truly gained traction beyond existing special protections for vulnerable users (like children) or specific information (like that of healthcare and finances). Some states, notably including California, are attempting to solve this perceived problem of data privacy on their own, but they often create bigger problems and pass potentially unconstitutional and often poorly drafted solutions.

All states have at least minimal data breach laws, and the quality of such laws, both in effectiveness and in impact on innovation, varies. Normally states work as "laboratories of democracy" and are able to test out different regulatory schemes for new technologies with less demosclerosis than the federal process. Similarly, they are better able to account for different preferences in tradeoffs, and in some cases they are more able to remove barriers to entry by reforming existing areas of law, like licensure or products liability, to accommodate a new technology. In areas like autonomous vehicles, telemedicine, and drone policy, states are often leading the way in embracing these new technologies. However, a new trend in some states of formally regulating the Internet through laws aimed at data privacy or net neutrality, to correct what they perceive as failures of the federal government to act, ignores the potential damage to the permissionless federal policy that made the Internet what it is today.

California has passed the California Consumer Privacy Act (CCPA), and other states are likely to follow suit. Unfortunately, these types of statutes are likely to harm innovation in a misguided attempt to correct issues with data privacy. Moreover, these statutes could reach far beyond state borders, illustrating the potential risks of a fifty-state privacy patchwork.

These laws will likely create problems in identifying which entities are covered by the privacy legislation. California's recent CCPA defines those required to comply so ambiguously that a reasonable interpretation would imply the law applies so long as a single user is a resident of California, whether that user is accessing the website from California or not, and whether the website purposefully avails itself of California or not.

State laws also unintentionally make it more difficult for small, local companies to compete with Internet giants. Large companies like Google and Facebook can afford the cost of additional compliance, but it is more difficult for smaller and mid-size companies to cover such costs. As a result, even if they are able to comply, they are often more limited in their ability to fund future innovation, as they instead invest resources in compliance. In a world of state-based privacy laws, it is inevitable that some states would impose contradictory standards, which might actually make things worse rather than better as companies pick and choose which states' laws to comply with. What is already playing out in Europe, where small and mid-size companies are choosing to exit the market rather than spend the cost of complying with new restrictions, could play out for states with more restrictive data requirements. And it's not just fledgling startups that have difficulty: the L.A. Times and Chicago Tribune have been unavailable to Europeans since GDPR became effective, as they had not completed compliance by the May deadline. In some cases companies have found it easier to block or exclude affected users than to comply with onerous data restrictions.

In some cases, states making exceptions for companies below a certain number of users may also discourage growth at a certain point. For example, the CCPA kicks in at 50,000 users, so there is a large marginal cost to gaining the 50,001st user, as compliance with the standards is immediately required. This might lead newer platforms to cap their user bases or encourage innovators to look for loopholes to avoid the high cost of compliance early on.

But even if states were able to create a sort of interstate compact that created an effectively uniform state level set of privacy laws, it would still be an inappropriate use of federalism for the state to govern data privacy due to its de facto impact on interstate commerce and the First Amendment.

The Internet by its very nature transcends state borders, and any state law aimed at privacy is likely to have national and global impact. This is not what is intended by federalism, and it is not just the case for states like California with a significant number of tech companies. If there are 50 different state laws, then new online intermediaries will have to develop 50 different compliance policies, or the most restrictive state will become the de facto standard for everyone left in the industry. As Jeff Kosseff points out, a world of 50 variations of the same privacy law, keyed to users' locations, would likely require out-of-state content creators to make significant changes to their existing systems and place an undue burden on content creators and users.

Additionally, there are legitimate concerns that the First Amendment right to share information may conflict with the way privacy rights would be enforced under proposed laws. Requiring otherwise lawful content to be removed silences the speaker. For example, if a friend posts a picture from a party that includes you and you ask that all your data be removed, is that data yours or your friend's? To remove the data would silence a speaker and value one individual's right to privacy over another's right to speak. In some cases such tradeoffs could be reasonable, such as for speech that is not merely offensive but causes clear harm to the person it is about, like revenge porn, but in many cases it is far less clear. Unfortunately, when faced with the crippling potential sanctions of such laws, many companies take a remove-first, ask-questions-later approach, as has been seen with copyright under the Digital Millennium Copyright Act (DMCA).

While there is a growing voice for data privacy, there seems to be little willingness on the part of consumers or regulators to make such tradeoffs. The so-called "privacy paradox" describes how people do not take the actions that would match their stated desire for increased data privacy, and many willingly admit they prefer the convenience they receive in exchange for their data. If action on data privacy is necessary, it should occur at a federal level to avoid the patchwork problems that would result from inconsistent state laws. Any law must be narrowly tailored to respect the First Amendment rights of both users and platforms. We must also be aware of the tradeoffs we are making between innovation and privacy when we see calls for a US GDPR. At the same time, we should be concerned that, as a result of the heavy burden of GDPR compliance, a more regulated Internet where only those who can afford to comply survive may replace the permissionless, startup-driven American version.

While federal preemption may be needed to address a patchwork of state privacy laws, we should be cautious and seek to avoid the mistakes of GDPR type privacy laws that place a value on individual privacy above innovation and knowledge sharing. Simple steps in providing more transparent information and requirements for notification are more likely to allow individuals to make the privacy choices that best fit their needs.

A privacy patchwork of state based “solutions” is likely to create more problems than it solves. The real solutions to our current dilemmas will come from conversations about how we balance the rewards of innovation with individual preferences for privacy.

How Should Privacy Be Defined? A Roadmap
https://techliberation.com/2018/08/06/how-should-privacy-be-defined-a-roadmap/ (Mon, 06 Aug 2018)

Privacy is an essentially contested concept. It evades a clear definition, and when it is defined, scholars define it inconsistently. So what are we to do now with this fractured term? Ryan Hagemann suggests a bottom-up approach. Instead of beginning from definitions, we should be building a folksonomy of privacy harms:

By recognizing those areas in which we have an interest in privacy, we can better formalize an understanding of when and how it should be prioritized in relation to other values. By differentiating the harms that can materialize when it is violated by government as opposed to private actors, we can more appropriately understand the costs and benefits in different situations.

Hagemann aims to route around definitional problems by exploring the spaces where our interests intersect with the concept of privacy, in our relations to government, to private firms, and to other people. It is a subtle but important shift in outlook that is worth exploring.

Hagemann's colleague Will Wilkinson laid out the benefits of this kind of philosophical exercise, which comes to me via Paul Crider. Wilkinson traces it back to the very beginnings of liberal thought, which takes a bit to wind up:

Thomas Reid, the Scottish Enlightenment philosopher, pointed out that there are two ways to construct an account of what it means to really know something, rather than just believing it to be true. The first way is to develop an abstract theory of knowledge—a general criterion that separates the wheat of knowledge from the chaff of mere opinion—and then see which of our opinions qualify as true knowledge. Reid noted that this method tends to lead to skepticism, because it’s hard, if not impossible, to definitively show that any of our opinions check off all the boxes these sort of general criteria tend to set out.

That’s why Descartes ends up in a pickle and Hume leaves us in a haze of uncertainty. It’s all a big mistake, Reid said, because the belief that I have hands, for example, is on much firmer ground than any abstract notions about the nature of true knowledge that I might dream up. If my theory implies that I don’t really know that I have hands, that’s a reason to reject the theory, not a reason to be skeptical about the existence of my appendages.

According to Reid, a better way to come up with a theory of knowledge is to make a list of the things we’re very sure that we really know. Then, we see if we can devise a coherent theory that explains how we know them.

The 20th century philosopher Roderick Chisholm called these two ways of theorizing about knowledge “methodism”—start with a general theory, apply it, and see what, if anything, counts as knowledge according to the theory—and “particularism”—start with an inventory of things that we’re sure we know and then build a theory of knowledge on top of it.

Hagemann is right to build privacy on the particularism of Wilkinson, Reid and Chisholm. Given the changing nature of technology, we should take a regular “inventory of things that we’re sure we know” about privacy and then build theories on top of it.

Indeed, privacy scholarship finds its genesis in this method. While many have gotten hung up on the rights talk in the “Right to Privacy”, Warren and Brandeis actually aim “to consider whether the existing law affords a principle which can properly be invoked to protect the privacy of the individual; and, if it does, what the nature and extent of such protection is.” The article looks to previous law to construct a principle for “recent inventions and business methods.” This is particularism applied to privacy.

Only a handful of court cases are actually reviewed in the article, the most important of which is Marian Manola v. Stevens & Myers. Marian Manola was a classically trained comic opera prima donna who had a string of altercations with her company, of which Stevens was the manager. About a year before the case, the New York Times carried a story describing a dispute between Manola and another actor in the McCaull Opera Company. She refused to go on stage after the actor pushed her on stage, and Benjamin Stevens apparently "ignored her until she returned to her duty." About a year later, Stevens set up the photographer Myers in a box as a stunt to boost sales. Manola sued both of them. Today, the case would be cited in the right-to-publicity literature.

Still, Warren and Brandeis were trying to survey the land of privacy harms and then build a principle on top of it.

Be it particularism or methodism, these ways of constructing knowledge frame the moral ground, creating a field where privacy advocates and privacy scholars can converse. What unites these two groups, then, is their common rhetoric about the contours of privacy harms. And so, what constitutes a harm remains the central question in privacy policy.

Adam Thierer on Permissionless Innovation
https://techliberation.com/2014/05/13/thierer/ (Tue, 13 May 2014)

Adam Thierer, senior research fellow with the Technology Policy Program at the Mercatus Center at George Mason University, discusses his latest book Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Thierer discusses which types of policies promote technological discoveries as well as those that stifle the freedom to innovate. He also takes a look at new technologies — such as driverless cars, drones, big data, smartphone apps, and Google Glass — and how the American public will adapt to them.

Download

Related Links

]]>
https://techliberation.com/2014/05/13/thierer/feed/ 0 74547
Anupam Chander on free speech and cyberlaw https://techliberation.com/2013/11/12/anupam-chander-on-free-speech-and-cyberlaw/ https://techliberation.com/2013/11/12/anupam-chander-on-free-speech-and-cyberlaw/#respond Tue, 12 Nov 2013 11:00:03 +0000 http://techliberation.com/?p=73785

Anupam Chander, Director of the California International Law Center and Martin Luther King, Jr. Hall Research Scholar at the UC Davis School of Law, discusses his recent paper with co-author Uyen P. Lee titled The Free Speech Foundations of Cyberlaw. Chander addresses how the First Amendment promotes innovation on the Internet; how limitations to free speech vary between the US and Europe; the role of online intermediaries in promoting and protecting the First Amendment; the Communications Decency Act; technology, piracy, and copyright protection; and the tension between privacy and free speech.

Download

Related Links

]]>
https://techliberation.com/2013/11/12/anupam-chander-on-free-speech-and-cyberlaw/feed/ 0 73785
The Risks of Misapplied Privacy Regulation https://techliberation.com/2012/04/03/the-risks-of-misapplied-privacy-regulation/ https://techliberation.com/2012/04/03/the-risks-of-misapplied-privacy-regulation/#respond Tue, 03 Apr 2012 19:35:34 +0000 http://techliberation.com/?p=40682

Reason.org has just posted my commentary on the five reasons why the Federal Trade Commission’s proposals to regulate the collection and use of consumer information on the Web will do more harm than good.

As I note, the digital economy runs on information. Any regulations that impede the collection and processing of information will affect its efficiency. Given the overall success of the Web and the popularity of search and social media, there’s every reason to believe that consumers have been able to balance their demand for content, entertainment and information services with the privacy policies these services offer.

But there’s more to it than that. Technology simply doesn’t lend itself to top-down mandates. Notions of privacy are highly subjective. Online, there is an adaptive dynamic constantly at work. Certainly, web sites have sometimes pushed the boundaries of privacy. But only when those boundaries are tested do we find out where the consensus lies.

Legislative and regulatory directives pre-empt experimentation. Consumer needs are best addressed when best practices are allowed to bubble up through trial and error. When the economic and functional development of European Web media, which labors under the sweeping top-down European Union Privacy Directive, is contrasted with the dynamism of the U.S. Web media sector, which has been relatively free of privacy regulation, the difference is profound.

An analysis of the web advertising market undertaken by researchers at the University of Toronto found that after the Privacy Directive was passed, online advertising effectiveness decreased on average by around 65 percent in Europe relative to the rest of the world. Even when the researchers controlled for possible differences in ad responsiveness between Europeans and Americans, this disparity manifested itself. The authors go on to conclude that these findings will have a “striking impact” on the $8 billion spent each year on digital advertising: namely, that European sites will see far less ad revenue than counterparts outside Europe.

Other points I explore in the commentary are:

  • How free services go away and paywalls go up
  • How consumers push back when they perceive that their privacy is being violated
  • How Web advertising lives or dies by the willingness of consumers to participate
  • How greater information availability is a social good

The full commentary can be found here.

 

]]>
https://techliberation.com/2012/04/03/the-risks-of-misapplied-privacy-regulation/feed/ 0 40682
How Do-Not-Track is Like Inconceivable https://techliberation.com/2011/07/25/how-do-not-track-is-like-inconceivable/ https://techliberation.com/2011/07/25/how-do-not-track-is-like-inconceivable/#respond Mon, 25 Jul 2011 17:08:44 +0000 http://techliberation.com/?p=37906

]]>
https://techliberation.com/2011/07/25/how-do-not-track-is-like-inconceivable/feed/ 0 37906
Privacy Solutions: How to Block Facebook’s “Like” Button And Other Social Widgets https://techliberation.com/2011/05/20/privacy-solutions-how-to-block-facebooks-like-button-and-other-social-widgets/ https://techliberation.com/2011/05/20/privacy-solutions-how-to-block-facebooks-like-button-and-other-social-widgets/#comments Fri, 20 May 2011 20:16:16 +0000 http://techliberation.com/?p=36903

Social widgets, such as the now-ubiquitous Facebook “Like” button and Twitter “Tweet” button, offer users a convenient way to share online content with their friends and followers. These widgets have recently come under scrutiny for their privacy implications. Yesterday, The Wall Street Journal reported that Facebook, Twitter, and Google are informed each time a user visits a webpage that contains one of the respective company’s widgets:

Internet users tap Facebook Inc.’s “Like” and Twitter Inc.’s “Tweet” buttons to share content with friends. But these tools also let their makers collect data about the websites people are visiting. These so-called social widgets, which appear atop stories on news sites or alongside products on retail sites, notify Facebook and Twitter that a person visited those sites even when users don’t click on the buttons, according to a study done for The Wall Street Journal.

It wasn’t exactly a secret that social widgets “phone home.” However, the Journal’s story shed new light on how the firms that offer social widgets handle the data they glean regarding user browsing habits. Facebook and Google reportedly store this data for a limited period of time — two weeks and 90 days, respectively — and, importantly, the data isn’t recorded in a way that can be tied back to a user (unless, of course, the user affirmatively decides to “like” a webpage). Twitter reportedly records browsing data as well, but deletes it “quickly.”

Assuming the companies effectively anonymize the data they glean from their social widgets, privacy-conscious users have little reason to worry. I’m not aware of any evidence that social widget data has been misused or breached. However, as Pete Warden reminded us in an informative O’Reilly Radar essay posted earlier this week, anonymizing data is harder than it sounds, and supposedly “anonymous” data sets have been successfully de-anonymized on several occasions. (For more on the de-anonymization of data sets, see Arvind Narayanan and Vitaly Shmatikov’s 2008 research paper on the topic).
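Warden’s point about the fragility of anonymization can be made concrete with a toy linkage attack. The Python sketch below is purely illustrative (every user ID, name, and domain in it is invented): it re-identifies an “anonymized” browsing log by matching each record’s fingerprint of visited sites against auxiliary data that ties some of the same visits to real identities.

```python
# Toy linkage attack (all names and domains are hypothetical): even with
# identifiers removed, the set of sites a user visited can act as a
# fingerprint. An attacker holding auxiliary data that links a few of the
# same visits to real identities can re-identify "anonymous" records.

anonymized_log = {
    "user_a1": {"news.example", "shoes.example", "rare-hobby.example"},
    "user_b2": {"news.example", "mail.example"},
}

# Auxiliary knowledge, e.g. from a public profile or an earlier breach.
auxiliary = {
    "Alice": {"rare-hobby.example", "shoes.example"},
    "Bob": {"mail.example"},
}

def reidentify(anon, aux):
    """Link an anonymized record to a real identity whenever exactly one
    auxiliary profile is a subset of that record's browsing fingerprint."""
    linked = {}
    for anon_id, sites in anon.items():
        matches = [name for name, known in aux.items() if known <= sites]
        if len(matches) == 1:
            linked[anon_id] = matches[0]
    return linked

print(reidentify(anonymized_log, auxiliary))
# → {'user_a1': 'Alice', 'user_b2': 'Bob'}
```

The Narayanan–Shmatikov paper does essentially this at scale, with statistical similarity scoring in place of exact subset matching, which is why removing names alone is rarely enough.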

While these social widgets may well pose no real threat to privacy, some especially privacy-sensitive users might be wary of the risk of being “tracked” by a social networking service, however small that risk may be. Such concerns aren’t totally unreasonable — if, say, the browsing data collected by Facebook or Google were to be breached and subsequently de-anonymized and tied to authenticated (logged-in) users by malicious actors, the resulting privacy harms could be quite serious.

Fortunately for privacy-conscious users, there are several ways to stop social widgets from collecting data about your browsing habits. As the Journal points out, you can simply log out of your Twitter or Facebook account prior to visiting other websites. Other methods include clearing out your cookies or using your browser’s privacy mode when visiting social networking sites. And, of course, there’s always the “nuclear option” of deleting your social networking accounts entirely.

Perhaps the most convenient, slick way to avoid social widgets is to simply use a browser add-on that selectively disables cross-site requests from Facebook, Twitter, and Google. The WSJ profiled one such add-on, Disconnect, which is compatible with Chrome, Firefox, and Safari.

If you’re a Firefox user, the popular add-on NoScript also offers a robust and effective mechanism for blocking social widgets. To do so, you’ll need to paste a few lines of code in NoScript’s Application Boundaries Enforcer (ABE), a powerful module that allows users to establish custom rules governing scripts and cross-site requests. If you’ve got NoScript installed (get it here), simply go to the ‘Options’ menu, select the ‘Advanced’ tab, then the ‘ABE’ subtab:

After checking the ‘Enable ABE’ box, select the USER Ruleset, then paste in the following lines:

Site .facebook.com .fbcdn.net facebook.net
Accept from SELF
Accept from .facebook.com .fbcdn.net facebook.net
Deny INCLUSION

Site .twitter.com
Accept from SELF
Accept from .twitter.com
Deny INCLUSION

Site .google.com googleapis.com
Accept from SELF
Accept from .google.com
Deny INCLUSION

Then hit ‘Refresh’ and ‘OK’ and you’re all set. If you’ve done this correctly, you should no longer see Facebook, Twitter, or Google widgets. To verify that no data is being transmitted to the companies, install and run the HTTP traffic analyzer Fiddler, then visit a webpage featuring a social widget. If no HTTP request is transmitted to a social networking service, you’re in the clear. Note that this technique doesn’t affect Twitter, Facebook, or Google themselves, so you can still use each of these services with full functionality. If you want to block other social widgets, simply add additional lines to ABE in NoScript in the same manner as above, including the domains of the services you wish to block.
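For example, to block LinkedIn’s share widget you might add a ruleset following the same pattern (the domains below are my guess at where the widget loads from; check the actual requests with Fiddler or your browser’s tools before relying on them):

Site .linkedin.com platform.linkedin.com
Accept from SELF
Accept from .linkedin.com platform.linkedin.com
Deny INCLUSION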

As this post hopefully illustrates, privacy-conscious users aren’t helpless; extant technological solutions can address many privacy concerns already, while more robust tools are constantly emerging. As for Facebook, Twitter, and Google, it’s hard to fault them for responding to user demands. Statistics indicate that social widgets are immensely valuable and popular among users, so activating them by default is a sensible decision.

I’d like to see these firms offer a mechanism for authenticated users to opt out of social widget data collection entirely. Greater transparency regarding how the data sets are anonymized would also be welcome. Meanwhile, privacy-conscious users can take matters into their own hands by opting out manually.

]]>
https://techliberation.com/2011/05/20/privacy-solutions-how-to-block-facebooks-like-button-and-other-social-widgets/feed/ 21 36903
Some Thoughts on the Cell Phone Locational Privacy Hullabaloo https://techliberation.com/2011/05/03/some-thoughts-on-the-cell-phone-locational-privacy-hullabaloo/ https://techliberation.com/2011/05/03/some-thoughts-on-the-cell-phone-locational-privacy-hullabaloo/#comments Wed, 04 May 2011 03:18:53 +0000 http://techliberation.com/?p=36629

I spaced out and completely forgot to post a link here to my latest Forbes column, which came out over the weekend. It’s a look back at last week’s hullabaloo over “Apple, The iPhone, and a Locational Privacy Techno-Panic.” In it, I argue:

Some of the concerns raised about the retention of locational data are valid. But panic, prohibition and a “privacy precautionary principle” that would preemptively block technological innovation until government regulators give their blessings are not valid answers to these concerns. The struggle to conceptualize and protect privacy rights should be an evolutionary and experimental process, not one micro-managed at every turn by regulation.

I conclude the piece by noting that:

Public pressure and market norms also encourage companies to correct bone-headed mistakes like the locational info retained by Apple.  But we shouldn’t expect less data collection or less “tracking” any time soon.  Information powers the digital economy, and we must learn to assimilate new technology into our lives.

Read the rest here. And if you missed the essay Larry Downes posted here on the same subject last week, make sure to check it out.

]]>
https://techliberation.com/2011/05/03/some-thoughts-on-the-cell-phone-locational-privacy-hullabaloo/feed/ 1 36629
Europe Reimagines Orwell’s Memory Hole https://techliberation.com/2010/11/16/europe-reimagines-orwells-memory-hole/ https://techliberation.com/2010/11/16/europe-reimagines-orwells-memory-hole/#comments Tue, 16 Nov 2010 19:48:21 +0000 http://techliberation.com/?p=33047

Inspired by thoughtful pieces by Mike Masnick on Techdirt and L. Gordon Crovitz’s column yesterday in The Wall Street Journal, I wrote a perspective piece this morning for CNET regarding the European Commission’s recently proposed “right to be forgotten.”

A Nov. 4th report promises new legislation next year “clarifying” this right under EU law, suggesting not only that the Commission thinks it’s a good idea but, even more surprisingly, that it already exists under the landmark 1995 Privacy Directive.

What is the “right to be forgotten”?  The report is cryptic and awkward on this important point, describing “the so-called ‘right to be forgotten’, i.e. the right of individuals to have their data no longer processed and deleted when they [that is, the data] are no longer needed for legitimate purposes.”

The devil, of course, will be in the forthcoming details.  But it’s important to understand that under current EU law, the phrase “their data” doesn’t just mean information a user supplies to a website, social network, or email host.  Any information that refers to or identifies an individual is considered private information under the control of the person to whom it refers.  So “their data” means anyone’s data, even if the individual identified had nothing to do with its collection or storage.

And EU law doesn’t just limit privacy protections to computer data. Users have the right to control information about them appearing in printed and other analog formats as well.

As I say in the piece, the “right to be forgotten” begins to sound like Big Brother’s “memory hole” in Orwell’s classic 1984. But instead of Winston Smith “rectifying” newspaper articles at the direction of his faceless masters at the Ministry of Truth, a right to be forgotten creates a kind of personal memory hole. Something you did in the past that you would prefer never happened? Just issue orders to anyone who knows about it, and force them to destroy any evidence.

Of course such a right would be as impractical to enforce as it is ill-conceived to grant.

Both Masnick and Crovitz, in particular, worry about the free speech implications of such a right, both for the press and for individuals.  And those are indeed potentially catastrophic.  Having the power to rewrite history devalues any information, including information that hasn’t been erased.

The social contract operates on facts and the ability to sort out truth from falsehood. A right to be forgotten gives every individual the power to rewrite that contract whenever they feel like it. So who would sensibly enter into such a relationship in the first place?

My concern, however, is even more metaphysical. The privacy debate currently going on in public policy circles is disturbing, perhaps most of all because it is being framed as a policy discussion. Rather than work out what costs and benefits we get from increased information sharing with each other, those who are feeling anxious about the pace of change in digital life are running, as anxious people often do, to regulators, demanding they do something—anything—to alleviate their future shock. And regulators, who are pretty anxious people themselves, are too often happy to oblige, even when they understand neither the technology nor the implications of their lawmaking.

Beyond the worst possible choice of forum to begin a conversation, the privacy debate in its current form is no debate at all.  It is mostly a bunch of emotional people hurling rhetorical platitudes at each other, trading the worst-case examples of the deadly potential of privacy invasions (teen suicides, evil corporations) with fear-inspiring claims of the risk of keeping information secret (terrorists win).

It’s not really a debate at all when the two “sides” are talking about entirely different subjects.  And when no one’s really listening anyway. All that is happening is that the stress level amps up, and those not participating in the discussion get the distinct impression that the world is about to end.

A starting point for a real conversation about privacy—one that is dangerously absent from any of the current lawmaking efforts—is an understanding of the nature of information. Privacy in general, and a right to be forgotten specifically, begins with the false assumption that information (private or otherwise) is a kind of property, a discrete, physical item that can be controlled, owned, traded, used up, and destroyed. (Both “sides” have fallen into this trap, and can’t seem to get out.)

The fight often breaks down into questions of entitlement—who initially owns the information that refers to me?  The person who found it and translated it into a form that could be accessed by others, or the person to whom it refers, regardless of source?  Under what conditions can it be transferred?  Does the individual maintain a universal and inalienable right of rescission—the ability to take it back later, for any reason, and without compensating the person who now has it?

But these are the wrong questions to be asking in the first place.  Information isn’t property, at least not as understood by our industrial-age legal system or popular metaphors of ownership.  Information, from an economic standpoint, is a virtual good.  It can be “possessed” and used by everyone at the same time.  It can become more valuable in being combined with other information.  It can maintain or improve its value forever.

And, whether the law says so or not, it can’t be repossessed, put back in the safety deposit box, buried at sea, or “devoured by the flames” like the old newspaper articles Winston Smith rewrites when the truth turns out to be inconvenient to the past.  That of course was Orwell’s point.  You can send down the memory hole the newspaper that reported Big Brother’s promise of increased chocolate rations, but people still remember that he said it.  You can try to brainwash them, too, and limit their choice of language to eliminate the possibility of unsanctioned thoughts.  You can destroy the individual who rebels against such efforts.

But it still doesn’t work.  The facts, warts and all, are still there, even when their continued existence is subjectively embarrassing to an individual.  Believe me, I wish sometimes it were otherwise.  I would very much like to “rectify” high school, or my parents, or the recent death of my beloved dog.  The truth often hurts.

But burning all the libraries and erasing all the bits in the world doesn’t change the facts.  It just makes them harder to access.  And that makes it harder to learn anything from them.

Maybe the European Commission was just being sloppy in its choice of words.  Perhaps it has something much more limited in mind for a “right to be forgotten.”  Or perhaps as it begins the ugly process of writing actual directives that must then be implemented in law by member countries, it will see both the impossibility and danger of going down this path.

Perhaps they’ll then pretend they never actually promised to “clarify” such a right in the first place.

But we’ll all know that they did.  For whatever it’s worth.

]]>
https://techliberation.com/2010/11/16/europe-reimagines-orwells-memory-hole/feed/ 7 33047
The Privacy and Security Totentanz https://techliberation.com/2010/06/07/the-privacy-and-security-totentanz/ https://techliberation.com/2010/06/07/the-privacy-and-security-totentanz/#comments Tue, 08 Jun 2010 01:21:55 +0000 http://techliberation.com/?p=29504

I participated last week in a Techdirt webinar titled, “What IT needs to know about Law.”  (You can read Dennis Yang’s summary here, or follow his link to watch the full one-hour discussion.  Free registration required.)

The key message of The Laws of Disruption is that IT and other executives need to know a great deal about law—and more all the time. And Techdirt does an admirable job of reporting the latest breakdowns between innovation and regulation on a daily basis. So I was happy to participate.

Legally-Defensible Security

Not surprisingly, there were far too many topics to cover in a single seminar, so we decided to focus narrowly on just one:  potential legal liability when data security is breached, whether through negligence (lost laptop) or the criminal act of a third party (hacking attacks).  We were fortunate to have as the main presenter David Navetta, founding partner with The Information Law Group, who had recently written an excellent article on what he calls “legally-defensible security” practices.

I started the seminar off with some context, pointing out that one of the biggest surprises for companies in the Internet age is the discovery that having posted a website on the World Wide Web, they are suddenly and often inappropriately subject to the laws and jurisdiction of governments around the world.   (How wide is the web? World.)

In the case of security breaches, for example, a company may be required to disclose the incident to affected third parties (customers, employees, etc.) under state law. At the other extreme, executives of the company handling the data may be criminally liable if the breach involved personally identifiable information of citizens of the European Union (e.g., the infamous Google Video case in Italy earlier this year, which is pending appeal). Individuals and companies affected by a breach may sue the company under a variety of common law claims, including breach of contract (perhaps the violation of a stated privacy policy) or simple negligence.

The move to cloud computing amplifies and accelerates the potential nightmares. In the cloud model, data and processing are subcontracted over the network to a potentially wide array of providers who offer economies of scale, application or functional expertise, scalable hardware or proprietary software. Data is everywhere, and its disclosure can occur in an exploding number of inadvertent ways. If a security breach occurs in the course of any given transaction, just untangling which parties handled the data—let alone who let it slip out—could be a logistical (and litigation) nightmare.

The Limits of Negligence

Not all security breaches involve private or personal information, but it’s not surprising that the most notable breakdowns in security (or at least the most vividly reported) are those that expose consumer or citizen data, sometimes for millions of affected parties. (Some of the most egregious losses have involved government computers left unsecured, with sensitive citizen data unencrypted on the hard drive.) Consumer computing activity has surpassed corporate computing and is growing much faster. Privacy and security are topics that are increasingly hard to disentangle.

Which is not to say that the bungling of data affecting millions of users necessarily translates into legal consequences for the company that held the information. Under current law, even the most irresponsible behavior by a data handler does not necessarily translate into liability.

For one thing, U.S. law does not require companies to spare no expense in protecting data. As David Navetta points out, courts may find that, despite a breach, the precautions taken were nonetheless economically sensible, meaning they were justified given the likelihood of a breach and the potential consequences that followed. Adherence to ISO or other industry standards on data security may be sufficient to insulate a company from liability—though not always. (Courts sometimes find that industry standards are too lax.)

For the most part, tort law still follows the classic negligence formula of the beatified American jurist Learned Hand, who explained that the duty of courts was to encourage behavior by defendants that made economic sense. If courts found liability any time a breach occurred, then data handlers would be incentivized to spend inefficient amounts of money on protecting data, leading to a net social loss. (The classic cases involved sparks from locomotives causing fire damage to crops—perfect avoidance of damage, the courts ruled, would cost too much relative to the harm caused and the probability of it occurring.)
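Hand’s formula, articulated in United States v. Carroll Towing (1947), can be stated compactly: a defendant is negligent only when the burden of taking precautions is less than the expected harm, that is, the probability of loss times its magnitude:

B < P × L

where B is the burden of precautions, P the probability of harm, and L the magnitude of the loss. In the locomotive cases, if a spark arrester costs more than the expected crop damage it would prevent, failing to install one is not negligence.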

That, at least, is the common law regime that applies in the U.S. The E.U., under laws enacted in support of its 1995 Privacy Directive, follows a different rule, one that comes closer to product liability law, where any failure leads to per se liability for the manufacturer, or indeed for any company in the chain of sales to a consumer.

A case last week from the Ninth Circuit Court of Appeals, however, reminds us that a finding of liability doesn’t necessarily lead to an award of damages.  In Ruiz v. Gap, a job applicant whose personal information was lost when two laptop computers were stolen from a Gap vendor who was processing applications sued Gap, claiming to represent a class of applicants who were victims of the loss.

All of Ruiz’s claims, however, were rejected.  Affirming the lower court and agreeing with most other courts to consider the issue, the Ninth Circuit held that Ruiz could not sue Gap without a showing of “appreciable and actual damage.”  The cost of forward-looking credit monitoring didn’t count (Gap offered to pay Ruiz for that in any case), nor did speculative claims of future losses.  Actual losses, expressible and provable in monetary terms, were required.

The court also rejected claims under California state law and the state constitution, noting that an “invasion of privacy” does not occur until there is actual misuse of the data contained on the stolen laptops.  (Most laptop thefts are presumably motivated by the value of the hardware, not any data that might reside on the hard drive.)

As Eric Goldman succinctly points out, the Ninth Circuit case highlights some odd behavior by plaintiff class action lawyers in the recent hubbub involving Facebook, Google, and other companies who either change their privacy policies or who use customer data in ways that arguably violate that policy.  “[T]he most disturbing thing,” Eric writes, “is that so many plaintiffs’ lawyers seem completely uninterested in pleading how their clients suffered any consequence (negative or otherwise) from the gaffe at all. Their approach appears to be that the service provider broke a privacy promise, res ipsa loquitur, now write us a check containing a lot of zeros.”

A Surprising Lack of Law – And an Alternative Model for Redress

It’s not just the lawyers who are confused here. U.S. consumers, riled up by stories in the mainstream media, seem to live under the misapprehension that they have some legal right to privacy, or some protection of personal information, that can be enforced in courts against corporations.

That is true in the E.U., but not in the U.S.  The Constitutional “right to privacy” detailed in U.S. Supreme Court decisions of the last fifty years only applies to protections against government behavior.  There is no Constitutional right to privacy that can be enforced against employers, business partners, corporations, parents, or anyone else.

What about statutes? With a few specific exceptions for medical information, credit history, and a few other categories, there is also no federal law, and for the most part no state law, that protects consumer privacy against corporations. There’s no law that requires a website to publish its privacy policy, let alone follow it. Even if a privacy policy constitutes an enforceable contract (not entirely a settled matter of law), the Ruiz case reminds us that breach of contract is irrelevant without evidence of actual monetary damages.

Before storming the barricades demanding justice, however, keep in mind that the law is not the only source of a remedy.  (Indeed, law is rarely the most efficient or effective in any case.)

The lack of a legal remedy for misuse of private information doesn’t mean that companies can do whatever they like with data they collect, or need take no precautions to ensure that information isn’t lost or stolen.

As more and more personal and even intimate data migrates to the cloud, it has become crystal clear that consumers are increasingly sensitive (perhaps, economically speaking, over-sensitive) about what happens to it. Consumers express their unhappiness in a variety of media, including social networking sites, blogs, emails, and tweets. They can and do put economic pressure on companies whose behavior they find unacceptable: boycotts, switching to other providers, and activism that damages the brand of the miscreant.

Even if the law offers no remedy, in other words, the court of public opinion has proven quite effective.  Even without a court ordering them to do so, some of the largest data handlers have made drastic changes to their policies, software, and how they communicate with users.

Looming in the background of these stories is always the possibility that if companies fail to appease their customers, the customers will lobby their elected representatives to provide the kind of legal protections that so far haven’t proven necessary.  But given the mismatch between the pace of innovation and the pace of legal change, legislation should always be the last, not the first, resort.

So expect lots more stories about security breaches, and expect most of them to involve the potential disclosure of personal information.   (That’s one reason that laws requiring disclosure of breaches are a good idea.  Consumers can’t flex their power if they are kept in the dark about behavior they are likely to object to.)

And that means, as we conclude in the seminar, that IT executives making security decisions had better start talking to their counterparts in the general counsel’s office.

Because as hard as it is for those two groups to talk to each other, it’s much harder to have a conversation after a breach than before.  IT makes decisions that affect the legal position of the company; lawyers make decisions that affect the technical architecture of products and services.  The question isn’t whether to formulate a legally-defensible security policy, in other words, but when.

]]>
https://techliberation.com/2010/06/07/the-privacy-and-security-totentanz/feed/ 2 29504
Privacy Innovation: Adobe Flash Supports Private Browsing & Deletes Flash Cookies https://techliberation.com/2010/02/17/privacy-innovation-adobe-flash-supports-private-browsing-deletes-flash-cookies/ https://techliberation.com/2010/02/17/privacy-innovation-adobe-flash-supports-private-browsing-deletes-flash-cookies/#comments Thu, 18 Feb 2010 03:31:40 +0000 http://techliberation.com/?p=26221

At the FTC’s second Exploring Privacy roundtable at Berkeley in January, many of the complaints about online advertising centered on how difficult it was to control the settings for Adobe’s Flash player, which is used to display ads, videos and a wide variety of other graphic elements on most modern webpages. Critics also noted the potential for unscrupulous data collectors to “re-spawn” standard (HTTP) cookies even after a user deleted them, simply by referencing the Flash cookie stored on the user’s computer by that domain—thus circumventing the user’s attempt to clear out their own cookies. Adobe responded to the first criticism by promising to include better privacy management features in Flash 10.1, and to the second by condemning such re-spawning and calling for “a mix of technology tools and regulatory efforts” to deal with the problem (including FTC enforcement). (Adobe’s filing offers a great history of Flash, a summary of its use and an introduction to Flash Cookies, which Adam Marcus detailed here.)

Earlier this week (and less than three weeks later), Adobe rolled out Flash 10.1, which offers an ingenious solution to the problem of how to manage Flash cookies: Flash now simply integrates its privacy controls with Internet Explorer, Firefox and Chrome (and will soon do so with Safari). So when the user turns on “private browsing mode” in these browsers, Flash cookies will be stored only temporarily, allowing users to use the full functionality of the site, but the Flash Player will “automatically clear any data it might store during a private browsing session, helping to keep your history private.” That’s a pretty big step and an elegantly simple solution to the problem of how to empower users to take control of their own privacy. Moreover:

Flash Player separates the local storage used in normal browsing from the local storage used during private browsing. So when you enter private browsing mode, sites that you previously visited will not be able to see information they saved on your computer during normal browsing. For example, if you saved your login and password in a web application powered by Flash during normal browsing, the site won’t remember that information when you visit the site under private browsing, keeping your identity private.

Our friends at PrivacyChoice applauded this move but suggested that Adobe ought to take the browser-integration concept one step further such that, when “Consumers …  clear their browsing history using native browser controls, they wipe the slate clean with respect to cookies.” To the extent that Flash cookies are, indeed, actually being used just like standard cookies for tracking purposes, I think that kind of control would implement the reasonable expectation of consumers. Adobe is already working on how to implement this technologically complicated solution (which should fix the re-spawning problem), as they noted in their FTC filing:

Adobe has approached the major browser companies to determine whether there is an efficient way to provide users the opportunity to control their Flash Local Storage (and all Local Storage for that matter) when they set their browser privacy settings. We will continue to pursue these efforts and encourage browsers companies to work with us to address the needs of our common customers–in particular to ensure that users can set preferences and clear Local Storage (for Adobe Flash Player and other technologies using Local Storage) in the place where they have learned to set their privacy settings. Without this, we could solve the issue for Flash Player and see developers move towards other technologies to accomplish the same type of misuse and abuse that you see with Flash Local Storage today.

So, I look forward to seeing Adobe continue its privacy innovation in a future version of Flash by implementing some kind of feature in browsers that lets users delete their Flash cookies along with their HTTP cookies.

A Similar Approach to “Browser Fingerprints”?

Similarly, I look forward to seeing future browsers address the problem raised by the eagle-eyed watchdogs at the Electronic Frontier Foundation:

When you visit a website, you are allowing that site to access a lot of information about your computer’s configuration. Combined, this information can create a kind of fingerprint — a signature that could be used to identify you and your computer. But how effective would this kind of online tracking be?

Peter Eckersley explains how such identification could occur here.

It turns out that, in addition to the commonly discussed “identifying” characteristics of web browsers, like IP addresses and tracking cookies, there are more subtle differences between browsers that can be used to tell them apart. One significant example is the User-Agent string, which contains the name, operating system and precise version number of the browser, and which is sent to every web server you visit. A typical User-Agent string looks something like this: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6

Check out EFF’s Panopticlick Project to see what information your browser is sending. I won’t pretend to be any expert on the technical back-end of this, but I’m pretty optimistic we’ll see a solution to this problem implemented at either the browser level or the OS level so that, if you decide you really want to shroud your browsing by using private browsing mode, you at least have the option of making your browsing really private by suppressing transmission of this information or just sending a set of standard answers that don’t uniquely identify you. (Of course, some users might actually want to use private browsing mode in a less-private mode that continues to send this information if, for example, the quality of their browsing experience suffers because webpages aren’t optimized for their screen resolution, browser version or OS.)
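The fingerprinting idea itself is easy to sketch. The toy Python below (the attribute names and values are invented for illustration) hashes a handful of browser-reported settings into a single identifier; no cookie or IP address is needed to recognize a returning browser, which is exactly why suppressing or standardizing these values matters:

```python
import hashlib

def fingerprint(attributes):
    """Combine browser-reported attributes into one stable identifier.

    Sorting the keys makes the hash deterministic, so the same browser
    configuration always produces the same fingerprint.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "user_agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.8.1.6) "
                  "Gecko/20070725 Firefox/2.0.0.6",
    "screen": "1280x800x24",
    "timezone": "UTC-5",
    "plugins": "Flash 10.1; Java 1.6",
}

print(fingerprint(browser))
```

Changing any single attribute (a different screen resolution, say) yields a completely different fingerprint, while an unchanged configuration is recognizable across visits.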

While I’m reasonably confident that the OS and browser makers will eventually solve this problem (which appears to be theoretical at this point, with no evidence that any advertisers are actually doing this sort of identification—say, to re-spawn deleted cookies), I can appreciate that some people might say that solution would be too slow in coming. I’d caution them against giving up on private browsing mode as an inadequate form of user empowerment—and leaping to the conclusion that only regulation can really fix the problem. Regulation comes at a cost, as Adam and I have repeatedly noted, given the benefits to users from data sharing. But moreover, it won’t protect anyone from truly bad actors that ignore U.S. regulation—which is a rather large problem since the Internet is a global medium, and we Internet users can’t just assume we’re safe because the U.S. government is regulating data collection.

So even if you think we need regulation, we clearly have to keep working on privacy-enhancing technologies. And if you don’t think my suggestion of simply applying Adobe’s approach of bolstering private browsing mode is going to happen quickly enough (or at all) and you thus conclude that the government needs to get involved, why assume that the government intervention should be sweeping proscriptive regulation of how data is collected and used online? Why not first start with the “less restrictive” alternative of having the government try to assist in brokering a deal among the key players to make this technology work?

Or perhaps the government could even help to fund the development of such technologies in the same way Secretary Clinton recently announced the State Department would “establish a standing effort that will harness the power of connection technologies and apply them to our diplomatic goals,” such as “supporting the development of new tools that enable citizens to exercise their rights of free expression by circumventing politically motivated censorship?” To be sure, we don’t want government funding or design in this space to crowd out private innovation, but figuring out how to avoid that problem while doing something constructive to improve privacy enhancing technologies would be far easier than trying to decide how to weigh the trade-offs inherent in data regulation.

And, hey, if I can truly shroud my browsing activity from tracking just by turning on private browsing mode, what’s the problem with tracking, again? If you think people don’t know enough about how to protect themselves, let’s give the FTC money to educate consumers—something they’re very good at, as the YouAreHere campaign for teens they launched late last year demonstrates.

Privacy Solutions Part 8: The Best Anonymizer Available: Tor, the TorButton & TorBrowser https://techliberation.com/2009/11/10/privacy-solutions-part-8-the-best-anonymizer-available-tor-the-torbutton-torbrowser/ https://techliberation.com/2009/11/10/privacy-solutions-part-8-the-best-anonymizer-available-tor-the-torbutton-torbrowser/#comments Tue, 10 Nov 2009 21:15:04 +0000 http://techliberation.com/?p=23299

By Eric Beach and Adam Marcus

In the previous entry in the Privacy Solutions Series, we described how privacy-sensitive users can use proxy servers to anonymize their web browsing experience, noting that one anonymizer stood out above all others: Tor, a sophisticated anonymizer system developed by the Tor Project, a 501(c)(3) U.S. non-profit venture supported by industry, privacy advocates and foundations, whose mission is to “allow you to protect your Internet traffic from analysis.” The Torbutton plug-in for Firefox makes it particularly easy to use Tor and has been downloaded over three million times. The TorBrowser Bundle is a pre-configured “portable” package of Tor and Firefox that can run off a USB flash drive and does not require anything to be installed on the computer on which it is used. Like most tools in the Privacy Solutions series, Tor has its downsides and isn’t for everyone. But it does offer privacy-sensitive users a powerful tool for achieving a degree of privacy that no regulation could provide.

Why Use Tor?

The Tor Project identifies its users as parents, militaries, journalists, law enforcement officers, activists, whistleblowers, and others. At a high level, Tor addresses four problems:

(1) Outbound blocking of Internet traffic by IP or domain name. Countries, businesses, and Internet service providers may block web users from accessing certain IPs associated with domain names that are deemed inappropriate. For example, access to certain domain names from inside some United States federal government computer networks is restricted, some companies block pornography, and some governments censor access to particular websites.

(2) Blocking of Internet traffic based upon content analysis. Rather than simply relying on website blacklists, many countries use content-based filtering to prevent individuals from seeking out information deemed undesirable. For example, the Chinese government censors searches for “falun gong” through packet inspection and analysis.

(3) ISP traffic logging. With the increased use of deep packet inspection, some privacy-sensitive Internet users worry that Internet service providers may be capable of logging the online activity of millions of Americans, and providing that information to governments or other third parties (lawfully or otherwise).

(4) Government monitoring. With the United States government’s pervasive surveillance of the electronic activities of Americans, some citizens understandably desire to protect their First Amendment right to anonymously send and receive information, i.e., without the government being able to determine their identity.

How Tor Works

The general web data flow online looks something like this:

As we mentioned in our piece about anonymizers, a sophisticated anonymizer can obscure the identity of any one web user by pooling requests from large numbers of users across a “daisy chain” of proxy servers, thus effectively anonymizing the user’s identity, like so:

Tor works somewhat differently: Rather than simply trying to achieve “anonymity in a crowd” (of other web users using the network), Tor’s “client software” (e.g., TorButton) picks a random path through a network of other “Tor nodes” (users of Tor) for every request sent from the user’s computer. As the Tor Project explains:

Tor helps to reduce the risks of both simple and sophisticated traffic analysis by distributing your transactions over several places on the Internet, so no single point can link you to your destination. The idea is similar to using a twisty, hard-to-follow route in order to throw off somebody who is following you – and then periodically erasing your footprints. Instead of taking a direct route from source to destination, data packets on the Tor network take a random pathway through several relays that cover your tracks so no observer at any single point can tell where the data came from or where it’s going.

Tor thus achieves a high degree of anonymity, relying “not on the trustworthiness of individual servers but rather on the network design, which prevents a given router from knowing both the origin and the destination or even which other routers it would need to cooperate with to get that information.”

The following chart from the Tor Project’s more extensive explanation conveys the basics:
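The layered construction that gives “onion routing” its name can be sketched in a few lines of Python. In this toy model (the relay names are invented, and base64 wrapping merely stands in for the per-relay public-key encryption real Tor uses), each relay peels exactly one layer and so learns only its adjacent hops, never both the origin and the destination:

```python
import json
from base64 import b64encode, b64decode

def wrap(message, relays):
    """Wrap a message in one layer per relay, outermost layer for the first hop.

    Base64 stands in for per-relay encryption: each blob represents
    'a layer only that relay can peel'.
    """
    payload = message
    for relay in reversed(relays):
        layer = json.dumps({"relay": relay, "data": payload})
        payload = b64encode(layer.encode()).decode()
    return payload

def peel(payload):
    """A relay removes exactly one layer, seeing only its own slice of the route."""
    layer = json.loads(b64decode(payload))
    return layer["relay"], layer["data"]

onion = wrap("GET http://example.org/", ["A", "B", "C"])
relay, onion = peel(onion)    # entry node A: sees the sender, not the request
assert relay == "A"
relay, onion = peel(onion)    # middle relay B: sees neither sender nor request
assert relay == "B"
relay, request = peel(onion)  # exit node C: recovers the request, not the sender
assert request == "GET http://example.org/"
```

The middle relay is the key to the design: it knows only that it received a blob from A and must pass a smaller blob to C.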

How to Install Tor

As mentioned above, Firefox users can install the TorButton plug-in, which will allow users to turn Tor on or off as desired.

The Tor Project also offers TorBrowser, an all-in-one bundle of the portable edition of Firefox (which can be carried along with all its settings on a USB stick or CD) pre-configured with the Tor plug-in. There is also a version of TorBrowser that includes the Pidgin instant messaging client, for those who also want to protect their instant messaging. Set-up takes less than three minutes and is just the thing for those trying to stay “one step ahead of The Man.” For more help on how to install the TorBrowser, click here or here.

Downsides/Risks of Tor

Speed. The biggest downside of using Tor is its slowness, which occurs for three reasons:

  1. Tor transports data among many intermediary nodes. Just as it takes considerably longer to drive from Los Angeles to San Francisco if you travel through Phoenix, Dallas, and Denver, so it takes considerably longer to go from the end-user to the final destination if the data packets must transfer through four or five intermediaries.
  2. Tor encrypts the data between the intermediary nodes.
  3. Some intermediary nodes do not have high-bandwidth connections.
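The three factors above compound. A back-of-the-envelope model, with entirely made-up per-hop numbers, shows why adding relays multiplies rather than merely adds delay:

```python
def tor_latency(hops, per_hop_ms=150, crypto_ms=5):
    """Toy model: each hop adds network transit plus encryption overhead.

    The per-hop figures are illustrative, not measurements.
    """
    return hops * (per_hop_ms + crypto_ms)

print(tor_latency(1))  # one hop (roughly a direct connection): 155 ms
print(tor_latency(4))  # a multi-relay circuit: 620 ms
```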

The following examples from an informal survey illustrate just how much Tor can slow down web browsing:

Domain       Time for Direct Access    Time for Tor Access
cnn.com      28.1 seconds              188 seconds
baidu.com    2.2 seconds               9.34 seconds
google.de    1.89 seconds              7.5 seconds
pff.org      15.87 seconds             74 seconds

Note: The results of the speed test depend heavily upon the specific Tor route used. Stopping Tor and then re-enabling it would likely produce a materially different result since the speed of the intermediary and exit-nodes would likely be different.

While Tor is slow, it can be improved mildly by changing a number of default configuration options. See here, here, here and here.

Increased Vulnerability. The second major downside is that the exit-node could record your data or perform a number of malicious attacks, as explained by Ars Technica and SecurityFocus.com. As the Berkman Center’s 2007 Circumvention Report noted, “Tor provides strong anonymity only if the user is careful to submit data to HTTPS protected servers.” If you plan to use Tor, you should consult the following Tor security warnings:

  • REMARK(S) ABOUT USING CONFIDENTIAL DATA ON (INSECURE) NON-HTTPS/SSL-CONNECTIONS: If you’re planning to visit password protected sites on non-encrypted connections, keep in mind that some exit-nodes record the passwords and possibly use them for abuse. Also all other transferred data is possibly recorded and misused.
  • REMARK(S) ABOUT ACCESSING ELECTRONIC BANKING AND OTHER SENSITIVE SITES VIA TOR: Most banks and similar institutions (PayPal for example) are using extended fraud countermeasures, like IP-origin plausibility checks and anonymous server blacklistings. Therefore you risk getting your bank account locked for security reasons by using the Tor-network.
  • REMARK(S) ABOUT (SECURE) HTTPS/SSL-CONNECTIONS TO FRAUD CRITICAL SITES: If you’re planning to visit fraud critical HTTPS/SSL-secured sites (Banks for example) and that specific site is querying you unexpectedly about accepting a new SSL-Certificate, be highly alert. Check the Certificate data or try another EXIT-node first. There are some rumors around, that some EXIT-nodes are trying to fake/highjack such HTTPS/SSL-connections.
Privacy Solutions Part 7: How Anonymizers Can Empower Privacy-Sensitive Users https://techliberation.com/2009/11/10/privacy-solutions-part-7-how-anonymizers-can-empower-privacy-sensitive-users/ https://techliberation.com/2009/11/10/privacy-solutions-part-7-how-anonymizers-can-empower-privacy-sensitive-users/#comments Tue, 10 Nov 2009 19:56:15 +0000 http://techliberation.com/?p=23296

By Eric Beach & Adam Marcus

Among Internet users, there are a variety of concerns about privacy, security and the ability to access content. Some of these concerns are quite serious, while others may be more debatable. Regardless, the goal of this ongoing series is to detail the tools available to users to implement their own subjective preferences. Anonymizers (such as Tor) allow privacy-sensitive users to protect themselves from the following potential privacy intrusions:

  1. Advertisers Profiling Users. Many online advertising networks build profiles of likely interests associated with a unique cookie ID and/or IP address. Whether this assembling of a “digital dossier” causes any harm to the user is debatable, but users concerned about such profiles can use an anonymizer to make it difficult to build such profiles, particularly by changing their IP address regularly.
  2. Compilation and Disclosure of Search Histories. Some privacy advocates such as EFF and CDT have expressed legitimate concern at the trend of governments subpoenaing records of the Internet activity of citizens. By causing thousands of users’ activity to be pooled together under a single IP address, anonymizers make it difficult for search engines and other websites–and, therefore, governments–to distinguish the web activities of individual users.
  3. Government Censorship. Some governments prevent their citizens from accessing certain websites by blocking requests to specific IP addresses. But an anonymizer located outside the censoring country can serve as an intermediary, enabling the end-user to circumvent censorship and access the restricted content.
  4. Reverse IP Hacking. Some Internet users may fear that the disclosure of their IP address to a website could increase their risk of being hacked. They can use an anonymizer as an intermediary between themselves and the website, thus preventing disclosure of their IP address to the website.
  5. Traffic Filtering. Some ISPs and access points allocate their Internet bandwidth depending on which websites users are accessing. For example, bandwidth for information from educational websites may be prioritized over Voice-over-IP bandwidth. Under certain circumstances, an anonymizer can obscure the final destination of the end-user’s request, thereby preventing network operators or other intermediaries from shaping traffic in this manner. (Note, though, that to prevent deep packet inspection, an anonymizer must also encrypt data).

How Anonymizers Work

A Simple Anonymizer

An anonymizer is an intermediary server between the end-user and the website that acts as a proxy for the user, effectively accessing websites on the end-user’s behalf, thereby hiding the end-user’s IP address (and perhaps other information).

Simple anonymizer diagram

A Real-World Analogy: Let’s say I want to order pizza from the local pizza shop, but I do not want them to have my phone number, which they could get from caller ID if I called them directly. Instead of calling them myself, I could call a friend and ask him to call them on my behalf, place my order, and then let me know how much it will cost and the estimated delivery time.

A Somewhat More Complicated Anonymizer Setup

A more sophisticated (and more realistic) anonymizer setup pools hundreds or even thousands of end-users through one or more anonymizing intermediaries. Consequently, web servers receive requests that originated from hundreds of end-users through a single IP address (that of the anonymizer). As a result, the web server cannot distinguish individual users or identify them by IP address.

Complicated anonymizer diagram
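This pooling can be modeled in a toy Python sketch. All of the addresses and function names below are invented for illustration (the IPs come from the reserved documentation ranges); this is a model of what the server’s log looks like, not a working proxy:

```python
PROXY_IP = "203.0.113.9"  # hypothetical anonymizer address

def direct_request(client_ip, url):
    """Without an anonymizer, the server logs the client's own IP address."""
    return {"source_ip": client_ip, "url": url}

def proxied_request(client_ip, url):
    """The anonymizer re-issues the request on the client's behalf,
    so the server logs only the proxy's address."""
    request = direct_request(client_ip, url)
    request["source_ip"] = PROXY_IP  # the client's address never reaches the server
    return request

# Three different users fetch the same page through the anonymizer...
log = [proxied_request(ip, "http://example.org/")
       for ip in ("198.51.100.1", "198.51.100.2", "198.51.100.3")]
# ...and every log entry shows the same source IP: the server cannot tell them apart.
```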

An Even More Complicated Anonymizer Setup

The above setup provides a layer of privacy beyond the traditional setup of direct end-user-to-website communication. But even so, if the anonymizer’s logs are compromised, so too is the privacy of the end-user, because it will likely be possible to associate specific requests with individual users.

A much greater degree of privacy protection is obtained by “daisy-chaining” together multiple anonymizers, but every additional hop slows down the browsing experience and leaves additional traces of the end-user’s traffic.

More complicated anonymizer diagram

How Do I Set Up an Anonymizer?

A variety of anonymizer services exist. Due to considerable variations in how each is installed, it is impossible to provide universal step-by-step details for installing one. But perhaps the two most trustworthy (free) options are Tor and Privoxy. While both services have experienced occasional vulnerabilities and hiccups, they are the best established among anonymizers. Other providers include: CGIProxy, AlchemyPoint, Nginx, SafeSquid, Squid, and yProxy. Since each anonymizer works differently and comes with its own set of pros, cons, and risks, it is extremely important to see whether a specific anonymizer meets your specific needs.

What Are the Downsides and Risks of an Anonymizer?

While an anonymizer offers considerable benefit to end-users concerned about the risks mentioned above, it is not a “silver bullet” or a “privacy panacea.” To start, an anonymizer is primarily a privacy tool, not a security tool (except insofar as sharing your IP address may increase your vulnerability to some cyber-attacks). In other words, an anonymizer does nothing to protect the integrity of your data as it is sent to and from the web server. Moreover, using an anonymizer may increase your vulnerability to cross-site request forgery, cookie stealing, and, in particular, simple packet sniffing. Beyond the security risks to your data, anonymizers introduce a number of other potential privacy risks:

(1) Anonymizer Recordkeeping. If an anonymizing intermediary server is located within a country that requires ISPs and other service providers to keep records of traffic, your browsing habits are not invisible. The government or an authorized third party could subpoena or seize the history of your browsing activity from the anonymizer.

(2) Man in the Middle Attacks. By routing your traffic through intermediaries, you increase your exposure to man-in-the-middle attacks.

(3) Selling Browsing Records. An anonymizer could sell or provide unauthorized access to your browsing history. If you transmit sensitive unencrypted data through an anonymizer, you are taking a considerable security risk.

(4) Login-Based Records. Some web services such as Google’s Web History record a significant amount of end-user behavior based upon voluntary user login. In other words, when logged in to your Google account, your Google search behavior (among other things) will be personally identifiable by Google, even if you are using an anonymizer.

(5) TCP Only. When most end-users access the Internet, they utilize many different services (e.g., email, Internet, teleconferencing) and these differing services often require different network protocols. “Packet sniffer” tools such as WireShark will let you examine the protocols and packets sent and received by your computer. Because many anonymizers do not handle non-TCP traffic, they would not anonymize other online activities such as Voice-over-IP phone calls.

Google’s Privacy Dashboard: Another Major Step Forward in User Empowerment & Transparency https://techliberation.com/2009/11/05/googles-privacy-dashboard-another-major-step-forward-in-user-empowerment-transparency/ https://techliberation.com/2009/11/05/googles-privacy-dashboard-another-major-step-forward-in-user-empowerment-transparency/#comments Thu, 05 Nov 2009 15:18:59 +0000 http://techliberation.com/?p=23198

Remember, remember the Fifth of November,
The Privacy Dashboard, so hot,
I know of no reason
Why the Privacy Dashboard
Should ever be forgot.

Sorry, I couldn’t resist, this being Guy Fawkes Day (a major traditional holiday for Britons and, more recently, geeky American libertarians such as myself, who dress up as V for Vendetta for Halloween). Google’s announcement of its Privacy Dashboard (TechCrunch) is a major step forward both in informing users about what data Google has tied to their account in each of Google’s many products and in empowering users to easily manage their privacy settings for each product. If users decide they’d rather “take their ball and go home,” they can do that, too, by simply deleting their data. Users can access the dashboard at www.google.com/dashboard (duh). Or, from the Google homepage, you just have to:

Announcing PFF’s Taxonomy of Online Security & Privacy Threats https://techliberation.com/2009/10/30/announcing-pffs-taxonomy-of-online-security-privacy-threats/ https://techliberation.com/2009/10/30/announcing-pffs-taxonomy-of-online-security-privacy-threats/#comments Fri, 30 Oct 2009 17:51:34 +0000 http://techliberation.com/?p=23131

PFF summer fellow Eric Beach and I have been working on what we hope is a comprehensive taxonomy of all the threats to online security and privacy. In our continuing Privacy Solutions Series, we have discussed and will continue to discuss specific threats in more detail and offer tools and methods you can use to protect yourself.

The taxonomy is located here.

The taxonomy of 21 different threats is organized as a table that indicates the “threat vector” and goal(s) of attackers using each threat. Following the table is a glossary defining each threat and providing links to more information. Threats can come from websites, intermediaries such as an ISP, or from users themselves (e.g., using an easy-to-guess password). The goals range from simply monitoring which (or what type of) websites you access to executing malicious code on your computer.

Please share any comments, criticisms, or suggestions as to other threats or self-help privacy/security management tools that should be added by posting a comment below.

Abandoning the Dumb Federal Cookie Policy https://techliberation.com/2009/08/11/abandoning-the-dumb-federal-cookie-policy/ https://techliberation.com/2009/08/11/abandoning-the-dumb-federal-cookie-policy/#comments Tue, 11 Aug 2009 19:32:59 +0000 http://techliberation.com/?p=20284

Today’s Washington Post has a story entitled U.S. Web-Tracking Plan Stirs Privacy Fears. It’s about the reversal of an ill-conceived policy adopted nine years ago to limit the use of cookies on federal Web sites.

In case you don’t already know this, a cookie is a short string of text that a server sends a browser when the browser accesses a Web page. Cookies allow servers to recognize returning users so they can serve up customized, relevant content, including tailored ads. Think of a cookie as an eyeball – who do you want to be able to see that you visited a Web site?
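Python’s standard-library http.cookies module can illustrate what such a string looks like from the server’s side. The cookie name and values below are invented for illustration:

```python
from http.cookies import SimpleCookie

# A server sends a Set-Cookie header like this with its response;
# the browser stores it and echoes it back on later visits.
header = 'session_id=abc123; Max-Age=2592000; Path=/'

cookie = SimpleCookie()
cookie.load(header)

print(cookie["session_id"].value)        # the short text the server will recognize
print(cookie["session_id"]["max-age"])   # how long the browser keeps it, in seconds
```

It is precisely because this recognition token sits on your machine that your browser can offer to refuse or delete it, as described below.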

Your browser lets you control what happens with the cookies offered by the sites you visit. You can issue a blanket refusal of all cookies, you can accept all cookies, and you can decide which cookies to accept based on who is offering them. Here’s how:

  • Internet Explorer: Tools > Internet Options > “Privacy” tab > “Advanced” button: Select “Override automatic cookie handling” and choose among the options, then hit “OK,” and next “Apply.”

I recommend accepting first-party cookies – offered by the sites you visit – and blocking third-party cookies – offered by the content embedded in those sites, like ad networks. (I suspect Berin disagrees!) Or ask to be prompted about third-party cookies just to see how many there are on the sites you visit. If you want to block or allow specific sites, select the “Sites” button to do so. If you selected “Prompt” in cookie handling, your choices will populate the “Sites” list.

  • Firefox: Tools > Options > “Privacy” tab: In the “cookies” box, choose among the options, then hit “OK.”

I recommend checking “Accept cookies from sites” and leaving unchecked “Accept third party cookies.” Click the “Exceptions” button to give site-by-site instructions.

There are many other things you can do to protect your online privacy, of course. Because you can control cookies, a government regulation restricting cookies is needless nannying. It may marginally protect you from government tracking – they have plenty of other methods, both legitimate and illegitimate – but it won’t protect you from tracking by others, including entities who may share data with the government.

The answer to the cookie problem is personal responsibility. Did you skip over the instructions above? The nation’s cookie problem is your fault.

If society lacks awareness of cookies, Microsoft (Internet Explorer), the Mozilla Foundation (Firefox), and producers of other browsers (Apple/Safari, Google/Chrome) might consider building cookie education into new browser downloads and updates. Perhaps they should set privacy-protective defaults. That’s all up to the community of Internet users, publishers, and programmers to decide, using their influence in the marketplace. (I suspect Berin is against it!)

Artificially restricting cookies on federal Web sites needlessly hamstrings federal Web sites. When the policy was instituted it threatened to set a precedent for broader regulation of cookie use on the Web. Hopefully, the debate about whether to regulate cookies is over, but further ‘Net nannying is a constant offering of the federal government (and other elitists).

By moving away from the stultifying limitation on federal cookies, the federal government acknowledges that American grown-ups can and should look out for their own privacy.

Privacy Solutions: Overview, Encryption & Anonymization https://techliberation.com/2009/08/06/privacy-solutions-overview-encryption-anonymization/ https://techliberation.com/2009/08/06/privacy-solutions-overview-encryption-anonymization/#comments Thu, 06 Aug 2009 19:45:35 +0000 http://techliberation.com/?p=19990

By Eric Beach, Adam Marcus & Berin Szoka

In the first entry of the Privacy Solutions Series, Berin Szoka and Adam Thierer noted that the goal of the series is “to detail the many ‘technologies of evasion’ (i.e., empowerment or user ‘self-help’ tools) that allow web surfers to better protect their privacy online.” Before outlining a few more such tools, we wanted to step back and provide a brief overview of the need for, goals of, and future scope of this series.

We started this series because, to paraphrase Smokey the Bear, “Only you can protect your privacy online!” While the law can play a vital role in giving full effect to the Fourth Amendment’s restraint on government surveillance, privacy is not something that can simply be created or enforced by regulation because, as Cato scholar Jim Harper explains, privacy is “the subjective condition that people experience when they have power to control information about themselves.” Thus, when the appropriate technological tools and methods exist and users “exercise that power consistent with their interests and values, government regulation in the name of privacy is based only on politicians’ and bureaucrats’ guesses about what ‘privacy’ should look like.” As Berin has put it:

Debates about online privacy often seem to assume relatively homogeneous privacy preferences among Internet users. But the reality is that users vary widely, with many people demonstrating that they just don’t care who sees what they do, post or say online. Attitudes vary from application to application, of course, but that’s precisely the point: While many reflexively talk about the ‘importance of privacy’ as if a monolith of users held a single opinion, no clear consensus exists for all users, all applications and all situations.

Moreover, privacy and security are both dynamic: The ongoing evolution of the Internet, shifting expectations about online interaction, and the constant revelations of new security vulnerabilities all make it impossible to simply freeze the Internet in place. Instead, users must be actively engaged in the ongoing process of protecting their privacy and security online according to their own preferences.

Our goal is to educate users about the tools that make this task easier. Together, user education and empowerment form a powerful alternative to regulation. That alternative is “less restrictive” because regulatory mandates come with unintended consequences and can never reflect the preferences of all users.

Many forthcoming Privacy Solution Series entries will describe tools that fit into two broad categories:

  • Encryption (protecting communications): The scrambling of content to protect against unauthorized viewing.
  • Anonymization (protecting identity): Paradoxically, the Internet offers an unprecedented degree of both anonymity and transparency/trackability. While most behavior online leaves a plethora of tracks in the form of ISP records, server logs, and cookie IDs, users can achieve a significantly greater degree of privacy online by blocking data collection mechanisms like cookies or by routing traffic through a non-monitored server.
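The “scrambling” idea behind encryption can be illustrated with a toy XOR stream cipher. This is a deliberately simplified sketch, not a secure algorithm (real tools use vetted ciphers such as AES); it only shows how a shared key turns readable content into gibberish and back.

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with a repeating key.
    # Applying the same operation twice restores the original,
    # so one function both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

message = b"meet me at noon"
key = b"secret"

ciphertext = xor_cipher(message, key)
plaintext = xor_cipher(ciphertext, key)

print(ciphertext != message)  # the content is scrambled
print(plaintext == message)   # the key recovers it
```

Only someone holding the key can reverse the scrambling, which is the property every encryption tool in this series ultimately provides (with far stronger mathematics).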

For some, one category is more important than the other. For example, some believe that public message boards are more civil when users are prohibited from posting anonymously and posts are signed with the user’s real name instead of a made-up “handle.” But these same people may feel very strongly that the content of emails should be protected (i.e., encrypted) so that only the intended recipient can view them.

In other situations and/or for other people, the exact opposite may be true. A user might not care that Gmail scans their email to provide targeted advertising as long as Google does not associate that information with their actual identity.

Regulatory solutions inevitably fail to recognize such complexity and even inconsistency of user preferences. By contrast, user empowerment offers diverse solutions for a diverse citizenry.

Additional information about encryption, anonymity & other technologies of evasion

  • Bruce Schneier’s Applied Cryptography (an older version of which is available in part online) is considered one of the definitive works on encryption for the layman.
  • Access Denied: the Practice and Policy of Global Internet Filtering, published in 2008 by Harvard’s Berkman Center, discusses encryption and technologies of evasion, while also describing current filtering and censoring efforts in many countries. You can view much of the book at the OpenNet initiative or preview the book at Google Books. Berkman’s 2007 Circumvention Landscape Report outlines technologies of censorship and technologies of evasion in an applied context.
  • The Electronic Frontier Foundation offers an excellent introduction to the basics of encryption as part of its Surveillance Self-Defense Project.
  • The Handbook for Bloggers and Cyberdissidents, published by Reporters Without Borders, details techniques for circumventing censorship.
]]>
https://techliberation.com/2009/08/06/privacy-solutions-overview-encryption-anonymization/feed/ 21 19990
Privacy Solutions (Part 5): CCleaner https://techliberation.com/2009/07/17/privacy-solutions-part-5-ccleaner/ https://techliberation.com/2009/07/17/privacy-solutions-part-5-ccleaner/#comments Fri, 17 Jul 2009 19:06:33 +0000 http://techliberation.com/?p=19501

By Eric Beach & Adam Thierer

In our ongoing “Privacy Solutions Series” we have been outlining various user-empowerment or user “self-help” tools that allow Internet users to better protect their privacy online. These tools and methods form an important part of a layered approach that we believe offers a more effective alternative to government-mandated regulation of online privacy. [See entries 1, 2, 3, 4] In this installment, we will be exploring CCleaner, a free Windows-based tool created by UK-based software developer Piriform that scrubs your computer’s hard drive and cleans its registry. We’ll describe how CCleaner helps you destroy data and protect your private information.

Whenever you move files to the recycling bin and subsequently purge the recycling bin, the affected files remain on your computer. In other words, deleting files from the recycling bin does not remove them from the computer. The reason for this is important and, in many ways, beneficial. Many computer file systems work something like an old library catalog system: a file entry is like a catalog card, containing a reference to the actual place on the hard drive where the file’s contents are stored. When a user deletes a file, the computer does not actually clean the affected hard drive space. Instead, to extend the analogy, it simply removes the card catalog entry that points to that space and marks the space as available for new files.

This behavior is usually beneficial because overwriting the hard drive space occupied by a file takes time. If you want evidence of this, look no further than the length of time required to fully reformat a hard drive (a full reformat actually clears the disk’s contents). The practical implication is that when you delete an important memo from your computer, it is not actually gone. Similarly, when you clear your browsing history, it is not gone. The bottom line is that anyone who can access your hard drive (a thief, the government, etc.) could view many or all of the files you deleted.
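The card-catalog analogy can be sketched in a few lines of Python. The disk, catalog, and one-slot allocator here are purely hypothetical stand-ins for a real file system; the point is that deletion touches only the index, not the data.

```python
# A toy model of the "card catalog" analogy: the catalog maps file
# names to locations on a simulated disk. Deleting a file removes
# only the catalog entry; the bytes stay on the disk until reused.
disk = bytearray(64)   # simulated hard drive
catalog = {}           # filename -> (offset, length)

def write_file(name, data):
    offset = 0  # simplistic allocator: this toy always writes at offset 0
    disk[offset:offset + len(data)] = data
    catalog[name] = (offset, len(data))

def delete_file(name):
    # The disk itself is untouched; only the index entry goes away.
    del catalog[name]

write_file("memo.txt", b"secret plans")
delete_file("memo.txt")

print("memo.txt" in catalog)  # the file appears to be gone
print(bytes(disk[:12]))       # yet the bytes are still on the "disk"
```

Anyone who reads the raw disk, bypassing the catalog, can still recover the “deleted” contents, which is exactly what forensic recovery tools do.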

The solution to this problem is to ensure that when a file is deleted, the space on the hard drive it occupied is not simply flagged as available but is entirely rewritten with unintelligible data. One of the best programs for accomplishing this is CCleaner (whose name formerly stood for “Crap Cleaner”!).

CCleaner enables you to select a host of potentially sensitive files (e.g., recycling bin, browser history, memory dumps, and cookies) and definitively delete them by overwriting them on disk. In particular, CCleaner lets the user choose whether files should be entirely overwritten once, three times (DOD 5220.22-M standard), seven times (NSA standard), or 35 times (Gutmann standard). The end result is that users can entirely remove a file from their machines. As an added benefit, CCleaner also allows users to delete files that may not be sensitive in nature but are unnecessary for everyday computer tasks and slow down the computer by their continued presence.
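The overwrite-before-delete approach can be sketched in Python. This is an illustrative simplification, not CCleaner’s actual implementation; note too that on journaling file systems and SSDs with wear leveling, an in-place overwrite is not guaranteed to reach the original blocks.

```python
import os

def secure_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with noise
            f.flush()
            os.fsync(f.fileno())       # push each pass to disk
    os.remove(path)

# usage: create a throwaway file, then scrub it with three passes
with open("sensitive.txt", "wb") as f:
    f.write(b"confidential memo")
secure_delete("sensitive.txt", passes=3)
```

Each extra pass multiplies the time spent on I/O, which is why multi-pass wipes are noticeably slower than ordinary deletion.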

The best part of CCleaner is that it is free, stable, safe, and extremely easy to use. It has won numerous awards and, according to the CCleaner website, the tool has been downloaded an astounding 300 million times.

To download CCleaner, visit http://www.ccleaner.com or http://download.cnet.com/ccleaner. More information about CCleaner is included below, including a couple of YouTube videos. The most important tip for using CCleaner is ensuring that all files deleted from the recycling bin are subsequently overwritten (and therefore cannot be recovered by someone who later accesses your hard drive). This feature is not enabled by default. To turn it on: (1) open CCleaner; (2) click “Options” in the bar on the left-hand side of the program; (3) click “Settings”; (4) check “Secure file deletion (Slower)”. The adjoining exhibit shows what that screen looks like.

CCleaner

For more information about CCleaner, please see the following helpful sites:

http://www.youtube.com/v/8wqegYPb_Ms&hl=en&fs=1&
http://www.youtube.com/v/5rqAgZedH60&hl=en&fs=1&
http://www.youtube.com/v/amPq1mG87Ic&hl=en&fs=1&
]]>
https://techliberation.com/2009/07/17/privacy-solutions-part-5-ccleaner/feed/ 18 19501
Privacy Solutions (Part 4): Firefox Privacy Features https://techliberation.com/2009/03/16/privacy-solutions-part-4-firefox-privacy-features/ https://techliberation.com/2009/03/16/privacy-solutions-part-4-firefox-privacy-features/#comments Mon, 16 Mar 2009 16:29:29 +0000 http://techliberation.com/?p=17401

As noted in the first installment of our “Privacy Solution Series,” we are outlining various user-empowerment or user “self-help” tools that allow Internet users to better protect their privacy online, and especially to defeat tracking for online behavioral advertising purposes. These tools and methods form an important part of a layered approach that we believe offers an effective alternative to government-mandated regulation of online privacy.

In the last installment, we covered the privacy features embedded in Microsoft’s Internet Explorer (IE) 8. This installment explores the privacy features in the Mozilla Foundation’s Firefox 3, both the current 3.0.7 version and the second beta of the next release, 3.5. (Note: the name of the next version of Firefox was just changed from 3.1 to 3.5 to reflect the large number of changes, but the beta is still named 3.1 Beta 2.) We’ll make it clear which features are new to 3.5 and which are shared with 3.0.7. Future installments will cover Google’s Chrome 1.0, Apple’s Safari 4, and some of the more useful privacy plug-ins for browsers. The availability and popularity of privacy plug-ins for Firefox such as AdBlock (which we discussed here), NoScript, and Tor significantly augment the privacy management capabilities of Firefox beyond what is currently baked into the browser. In evaluating the Web browsers, we examine:

(1) cookie management; (2) private browsing; and (3) other privacy features

History of Firefox

Firefox descends from NCSA Mosaic, the first widely used graphical web browser, developed at the National Center for Supercomputing Applications and first released in 1993. The co-author of Mosaic, Marc Andreessen, co-founded Netscape Communications and was the lead developer of Netscape Navigator, which was first released in 1994. In 1998, Netscape publicly released the source code for the latest version of its browser and created the Mozilla Organization to coordinate its development. AOL acquired Netscape Communications later that year, and when AOL scaled back its involvement with the Mozilla Organization in 2003, the Mozilla Foundation was launched to ensure the browser could survive without Netscape or AOL. The Mozilla Foundation released Firefox 1.0 on November 9, 2004. According to Net Applications, Firefox is currently the second-most popular Web browser after Internet Explorer, with 21.72% of the market in Q1 2009.

Cookie Management

To access Firefox’s basic cookie management and privacy settings, open the “Tools” menu, click “Options,” and then click on the “Privacy” tab to display the following options:

Options dialog box

Instead of using a slider, as Internet Explorer does, Firefox gives more direct control over cookies. Users can choose to refuse all cookies, refuse all third-party cookies (see the previous post in this series for an explanation of the difference between first-party cookies and third-party cookies), and/or control when cookies expire. The “keep until” box gives three options:

(1) “they expire” – Cookies determine their own expiration date.

(2) “I close Firefox” – Cookies are deleted when you close the browser.

(3) “ask me every time” – Every time a cookie is sent to the user’s computer, the user is asked whether to “Allow” the cookie (accept it and let the cookie determine its own expiration date), “Allow for Session” (equivalent to the “I close Firefox” setting), or “Deny.” Firefox can also optionally save the user’s preference for all future cookies received from that website. The “Show Details” button allows true power users to view the contents of each cookie before making a decision, as seen here:

Confirm setting cookie dialog box

By clicking the “Show Cookies” button in the Privacy tab of the Options dialog box, users can view all of the cookies already saved on their computer and delete individual cookies or all cookies at once.

Cookies dialog box

Finally, by clicking the “Exceptions” button in the Privacy tab of the Options dialog box, users can specify which websites are always or never allowed to set cookies.

Exceptions dialog box

In addition to having the option of deleting all cookies whenever the browser is closed, users can clear other types of private data when the browser is closed. The following dialog box is displayed when a user clicks on the “Settings” button in the Privacy tab of the Options dialog box.

Clear Private Data dialog box

Private Browsing

Similar to Internet Explorer 8’s “InPrivate Browsing” feature (see the previous post in this series for more information) and Chrome’s Incognito, Firefox 3.5 will include a new “Private Browsing Mode” that protects so-called “over the shoulder” privacy. To enable Private Browsing Mode, select “Private Browsing” from the Tools menu. To disable it and reload all the tabs that were open when you enabled it, just uncheck the same “Private Browsing” item in the Tools menu. There is a hidden way to make Firefox 3.1 Beta 2 always start in Private Browsing Mode, and a plan to possibly provide an easier way to do this in the final 3.5 release, but the only obvious use for this would be on public computers (e.g., at a library or coffee shop) where it can’t be guaranteed that each user will close the browser before leaving.

Other Privacy Features

  • Master Password – As more and more can be done online and more and more sites require user accounts (and passwords), having all those passwords stored in your web browser can be a security problem unto itself. Firefox allows you to view saved passwords, but it also allows you to protect all of your site-specific saved passwords with a single master password. Your saved passwords cannot be used to automatically log into websites and other individuals with access to your computer cannot view your saved passwords unless the master password is entered. Firefox also has a password quality meter to show you how secure your master password is from cracking attempts.
  • Instant Web Site ID – For all websites with an Extended Validation SSL Certificate, this feature displays the website owner’s name to the left of the URL in the address bar. Clicking on the “favicon” on the left side of the address bar displays additional information about the certificate (whether an Extended Validation Certificate or regular SSL certificate) and whether the connection is SSL-encrypted. A second click displays the Page Info dialog box which reports whether you’ve previously visited the website and how many times, whether the website is storing cookies on your computer (which you can view with another click), and if there are saved passwords for the website on your computer (which you can also view with another click). From the Page Info dialog box you can also view all of the media embedded in the webpage, all of the meta tags in the HTML source code for the page, any RSS feeds on the page, and the permissions in effect for the page.
  • Optional automatic phishing and malware protection – Two options in the “Security” tab of the Options dialog box, “Tell me if the site I’m visiting is a suspected attack site” and “Tell me if the site I’m visiting is a suspected forgery,” allow Firefox to automatically protect users from malware (attack sites) and phishing scams (forgery sites). When either option is enabled, Firefox automatically checks the URL of the page you’re visiting against a list of reported phishing and/or malware sites that it downloads in the background every 30 minutes. If you navigate to a page on one of these lists, Firefox double-checks that the URL is on the list by sending a cookie to google.com, which maintains the lists of identified malware and phishing sites used by Firefox. The anti-phishing aspect of this feature is equivalent to Internet Explorer’s SmartScreen Filter.
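The blocklist lookup described above can be sketched as follows. The list contents and matching rule are hypothetical, and the real Safe Browsing protocol matches hashed URL prefixes rather than raw URLs; this only illustrates the check-on-navigation idea.

```python
# A periodically refreshed set of reported bad URLs (hypothetical
# entries), consulted each time the browser navigates somewhere.
blocklist = {
    "badsite.example/login",
    "malware.example/download",
}

def check_navigation(url: str) -> str:
    # Strip the scheme and look the rest up in the local list.
    key = url.removeprefix("http://")
    return "blocked" if key in blocklist else "ok"

print(check_navigation("http://badsite.example/login"))  # a listed URL
print(check_navigation("http://example.com/"))           # an unlisted URL
```

Keeping the list local is what lets the check run on every page load without sending each visited URL to a remote server.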

Conclusion

In terms of privacy, what makes Firefox unique among the popular browsers is the extensive number of add-ons (also called “plug-ins” or “extensions”) designed to protect users’ privacy. Google’s Chrome browser does not currently support third-party add-ons but plans to do so in an upcoming release. Microsoft’s Internet Explorer does support extensions, and Microsoft has a website devoted to cataloging them, but it offers nothing like the variety and complexity of the add-ons available for Firefox. The two most popular Firefox add-ons (in terms of total downloads; currently second and fourth most popular in terms of weekly downloads) are specifically related to privacy. Adblock Plus (ABP) uses dynamically-updated “subscriptions” to maintain a list of unwanted third-party content and automatically blocks that content from being displayed or run by Firefox. ABP can block Flash code, images, external scripts, stylesheets, frames, tracking cookies, webbugs, HTML elements, text ads, backgrounds, and content matched by class, id, or any other HTML or CSS selector. By default, ABP allows all such elements unless they are blocked by a filter. NoScript, by contrast, blocks all Java, JavaScript, Flash, and other plugins unless you explicitly allow them on a particular website, either (i) temporarily, for your current session (until you close the browser), or (ii) permanently, for all future sessions. Thus, with these two add-ons, Firefox offers security-conscious users a much more secure (and thus private) browsing environment than is currently available in other browsers. We already covered Adblock Plus in a previous installment of our Privacy Solutions Series. We plan to cover NoScript and other popular Firefox add-ons such as TorButton and FoxyProxy in future installments.

Additional Reading / Links

]]>
https://techliberation.com/2009/03/16/privacy-solutions-part-4-firefox-privacy-features/feed/ 631 17401
Privacy Solutions (Part 3): Internet Explorer Privacy Features https://techliberation.com/2009/03/06/privacy-solutions-series-part-3-internet-explorer-privacy-features/ https://techliberation.com/2009/03/06/privacy-solutions-series-part-3-internet-explorer-privacy-features/#comments Fri, 06 Mar 2009 14:50:26 +0000 http://techliberation.com/?p=12538

By Adam Thierer, Berin Szoka, & Adam Marcus

As noted in the first installment of our “Privacy Solution Series,” we are outlining various user-empowerment or user “self-help” tools that allow Internet users to better protect their privacy online, and especially to defeat tracking for online behavioral advertising purposes. These tools and methods form an important part of a layered approach that we believe offers an effective alternative to government-mandated regulation of online privacy.

In some of the upcoming installments we will be exploring the privacy controls embedded in the major web browsers consumers use today: Microsoft’s Internet Explorer (IE) 8, the Mozilla Foundation’s Firefox 3, Google’s Chrome 1.0, and Apple’s Safari 4. In evaluating these browsers, we will examine three types of privacy features:

(1) cookie management controls; (2) private browsing; and (3) other privacy features

We will first be focusing on the default features and functions embedded in the browsers. We plan to do subsequent installments on the various downloadable “add-ons” available for browsers, as we already did for AdBlock Plus in the second installment of this series.

In this installment, we’ll be taking a look at the privacy-related features in the most popular browser in use today, Microsoft’s Internet Explorer. Specifically, we’ll be examining the most recent version of the browser, IE 8, Release Candidate 1. We’ll make it clear which features are new to IE 8 and those which are shared with IE 7.

Basic Background

Microsoft’s Internet Explorer browser was launched in 1995 and quickly became America’s most popular web browser, displacing Netscape’s Navigator browser. In recent years, IE has faced new challenges from the Mozilla Foundation’s “Firefox” browser, Apple’s “Safari,” the “Opera” browser, and others. (For an excellent history / timeline of web browsers, click here.) Despite these new challenges, IE still commands over 70% of the browser market. Like most other web browsers, Internet Explorer is free, as are the features we describe here.

Before we get further in the discussion of privacy controls, it’s important for readers to understand the difference between “first-party” and “third-party” content on webpages. Many webpages today contain a combination of content from many different websites, which enables powerful “Web 2.0” functionality like an interactive Google map displayed along with an address or a “Digg This” link in a blog post. Third-party content can also be used to track users across websites and to serve up advertising. All content loaded from the same domain as is displayed in the Address bar is first-party content. All content loaded from other domains is third-party content. Internet Explorer has a “Privacy Report” function that can show you the source for all the different content elements in the current webpage. To access it, select Webpage Privacy Policy from IE7’s Page menu or IE8’s View menu.
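A rough sketch of how a browser might make the first-party/third-party distinction: compare the “site” of the address-bar URL with that of each loaded resource. This naive version just takes the last two labels of the hostname; real browsers consult the Public Suffix List, since suffixes like “.co.uk” would fool the shortcut used here. All URLs below are hypothetical.

```python
from urllib.parse import urlparse

def site_of(url: str) -> str:
    # Naive "site" extraction: last two labels of the hostname.
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def is_third_party(page_url: str, resource_url: str) -> bool:
    # Content from a different site than the address bar is third-party.
    return site_of(page_url) != site_of(resource_url)

page = "http://www.example.com/article"
print(is_third_party(page, "http://static.example.com/logo.png"))  # same site
print(is_third_party(page, "http://ads.tracker.net/pixel.gif"))    # third party
```

This is the classification that drives every third-party-cookie setting discussed below: subdomains of the page’s own site count as first-party, while embedded ad and analytics servers do not.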

Basic Cookie Management Controls

To access Internet Explorer’s basic cookie management and privacy settings, open the “Tools” menu, click “Internet Options,” and then click on the “Privacy” tab to display the following options:

IE8 Internet Privacy Options

Users can configure the slider on the upper left-hand side of the window to establish their preferred level of cookie privacy. There are 6 options on the sliding scale from which to choose. Starting from the top of the slider bar:

(1) “Block all cookies” — Blocks IE from receiving any new cookies and blocks websites from reading any existing cookies on your computer. (Of course, that would greatly inconvenience users who regularly access websites that require information from the user, such as a Web-based email site that requires users to log in every time they access the website.)

(2) “High” — Blocks all cookies from websites that do not have a P3P compact privacy policy or that have a compact privacy policy which specifies that personally-identifiable information is used without your explicit consent. Cookies already on your computer can only be read by the site that created them.

(3) “Medium High” — “Blocks third-party cookies that do not have a compact privacy policy,” “Blocks third-party cookies that save information that can be used to contact you without explicit consent,” and “Blocks first-party cookies that save information that can be used to contact you without your implicit consent.”

(4) “Medium” — This setting “Blocks third-party cookies that do not have a compact privacy policy,” “Blocks third-party cookies that save information that can be used to contact you without your explicit consent,” and “Restricts first-party cookies that save information that can be used to contact you without your implicit consent.”

(5) “Low” — This setting “Blocks third-party cookies that do not have a compact privacy policy” and “Restricts third-party cookies that save information that can be used to contact you without implicit consent.”

(6) “Allow all cookies” — This setting allows all cookies from any website.

A P3P compact privacy policy is a machine-readable summary of the full P3P specification, which is a standardized method for explaining a website’s privacy policy. So when IE states that it will “block[] third-party cookies that save information that can be used to contact you without your explicit consent,” it means that the cookie will be blocked unless the site has a P3P compact privacy policy that either indicates that only non-identifiable (NOI) information is collected, or that for every data collection PURPOSE and every type of RECIPIENT that the website shares collected data with, the site’s policy is that the user must opt in (“explicitly consent”) to the practice.
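The kind of decision described above can be sketched as a small function. The accept/deny rule here is deliberately simplified and is not IE’s actual algorithm; “NOI” is the genuine P3P token for “no identifiable data collected,” but a real implementation would evaluate every purpose and recipient token in the policy.

```python
def allow_third_party_cookie(p3p_header):
    """Toy version of a 'Medium'-style check: block a third-party
    cookie unless the response carries a P3P compact policy whose
    tokens indicate acceptable data practices."""
    if p3p_header is None:
        return False  # no compact policy at all: block the cookie
    # Parse tokens out of a header like: CP="NOI DSP COR NID"
    tokens = p3p_header.upper().replace('CP="', "").rstrip('"').split()
    return "NOI" in tokens  # only non-identifiable data is collected

print(allow_third_party_cookie(None))                    # no policy: blocked
print(allow_third_party_cookie('CP="NOI DSP COR NID"'))  # acceptable: allowed
```

The mechanism matters more than this particular rule: the policy rides along as an HTTP header, so the browser can decide about each cookie without any round trip to a human-readable privacy page.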

When the slider bar is set anywhere other than the “High” and “Low” levels, users can also click the “Sites” button and then specify different cookie security levels for individual websites. The advantage of this approach is that it lets users create their own personal “white lists” and “black lists” of sites for which they either never want cookies blocked, or for which they always want cookies blocked. This increases the privacy-configurability of the browsing experience. For example, the following screen shows two sites that have been whitelisted and two hypothetical sites that have been blacklisted.

IE8 Per Site Privacy Actions

In addition, if the user wishes to manually delete their cookies, web browsing history, form data, personal passwords, or other stored information, they can do so on the “General” tab under the “Browsing History” section. Or, in the new IE 8, they can do so under the new “Safety” drop-down menu (in the Command toolbar) under the first option, “Delete Browser History.” They can also configure IE 8 so that all of this data is deleted each time the browser is closed (essentially converting “persistent cookies” into “session cookies,” concepts Adam Marcus has explained previously). The following screen shows how this user is choosing to delete just their temporary Internet files, cookies, and browsing history. Favorite websites are websites the user has bookmarked.
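The persistent-versus-session distinction is visible in the Set-Cookie attributes themselves, which Python’s standard http.cookies module can generate. A cookie with no expiration attribute lives only until the browser closes; deleting stored data on exit effectively treats every cookie that way. The cookie names and values below are made up for illustration.

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"                  # no expiry: session cookie
cookie["prefs"] = "dark_mode"
cookie["prefs"]["max-age"] = 60 * 60 * 24 * 365  # persistent: lives one year

# The serialized attributes show the difference the browser acts on.
print(cookie["session_id"].OutputString())
print(cookie["prefs"].OutputString())
```

Only the second cookie carries a Max-Age attribute, so only it survives a browser restart under default settings.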

IE8 Delete Browsing History

Using these controls, a particularly privacy-sensitive user who only trusts two or three sites (say, their bank and their employer’s website) could allow cookies for only those sites and block cookies for all other websites. Again, this assumes that they do not mind the potential hassle of logging in to many other sites each time they visit, or of losing custom preferences that would otherwise be stored in a cookie.

Advanced Cookie Management – “InPrivate Filtering”

Microsoft explains its InPrivate Filtering feature as follows:

Today websites increasingly pull content in from multiple sources, providing tremendous value to consumer and sites alike. Users are often not aware that some content, images, ads and analytics are being provided from third party websites or that these websites have the ability to potentially track their behavior across multiple websites. InPrivate Filtering provides users an added level of control and choice about the information that third party websites can potentially use to track browsing activity.

InPrivate Filtering is off by default and must be enabled on a per-session basis. To use this feature, select InPrivate Filtering from the Safety menu.

In “Automatically Block” mode, InPrivate Filtering will automatically block a site if IE finds that site’s content embedded in more than a user-specified number of other sites (the default is 10) visited by the user.  You can also manually control which sites are blocked, and import and export your list of white/blacklisted sites to share that list with others.

The beta version of IE8 included a subscriptions feature that would have allowed users to automatically receive updated whitelists or blacklists from others, much like the subscription feature in AdBlock Plus that we discussed previously. However, this functionality was removed in the “Release Candidate 1” version of IE8 (released Jan. 26, 2009) for unspecified reasons. While we recognize that not every beta feature makes it into final releases because of challenges in implementation, we very much hope Microsoft will ultimately add the subscription feature to Internet Explorer 8. InPrivate Filtering goes a long way toward empowering truly privacy-sensitive users to take more granular control over their own privacy, but a subscription feature would allow less sophisticated users to rely on groups or other individuals they trust to help them avoid specific sites according to their concerns about privacy or security. Indeed, we hope that other browser manufacturers consider incorporating such tools into their browsers. Perhaps the privacy advocates who currently focus on inventing one-size-fits-all regulatory or legislative solutions could channel their enthusiasm about user privacy into actually developing whitelists and blacklists.

Private Browsing

Another new privacy-related feature in Internet Explorer 8 is InPrivate Browsing mode (akin to “Incognito” mode in Chrome), which protects so-called “over the shoulder” privacy, although that’s a somewhat misleading term. By not saving any record of your web browsing while InPrivate Browsing mode is turned on, this feature ensures that others with access to your computer will not know what websites you have accessed. Some people like being able to refer to their browser history and don’t want to delete all of their cookies, but want to hide all traces of some of their browsing activities (such as shopping online for a surprise gift, searching for information about a medical condition they don’t want to disclose, or, most obviously, enjoying pornography).

When InPrivate Browsing mode is enabled, none of the varieties of “browsing history” data is saved, but none of your previous history is deleted, either. This comes in handy because, if someone with direct access to your computer is monitoring your browser history to see what you’ve been up to, deleting all of your browsing history would suggest that you’ve been doing something you wanted to hide. InPrivate Browsing mode instead allows you to surf privately when desired, without making it obvious that you’re doing so. Parents who are concerned about their kids using InPrivate Browsing mode can use the parental controls in Windows Vista to disable it, but there does not appear to be a way to disable InPrivate Browsing on Windows XP.

Below is a screenshot of the InPrivate Browsing mode-which, again, can be enabled by clicking on the new “Safety” drop-down menu in IE 8 and selecting “InPrivate Browsing.”

IE8 InPrivate Browsing

While InPrivate Browsing is active, the following takes place:

  • New cookies are not stored:
    • All new cookies become “session” cookies
    • Existing cookies can still be read
    • The new DOM storage feature behaves the same way
  • New entries will not be saved to the browsing history
  • New temporary Internet files will be deleted when the Private Browsing window is closed
  • The following data will not be stored:
    • Form data
    • Passwords
    • Addresses typed into the address bar
    • Queries entered into the search box
    • Visited links

Other Privacy Features

  • SmartScreen Filter – Called “Phishing filter” in IE 7, this feature monitors and blocks links to malicious downloads. In IE 8, it also monitors links distributed via email and instant messaging (assuming IE is the default Web browser).
  • Cross Site Scripting (XSS) filter – Cross-site scripting attacks allow hackers to “inject” malicious scripts into trusted websites, which can then steal the account credentials of users who access these websites. XSS attacks are dangerous because everything looks fine to users and the attackers can gain almost complete access to users’ computers. The XSS filter in IE constantly scans the data received from websites to determine if there is a likely XSS attack and re-writes the data to neutralize the attack.
  • ActiveX Opt-In – By default, ActiveX Opt-In disables most ActiveX controls. When a Web page tries to run an ActiveX control, the following text is displayed in an Information Bar: “This website wants to run the following add-on ‘ABC Control’ from ‘XYZ Publisher.’ If you trust the website and the add-on and want to allow it to run, click here …” The user can then choose whether or not to run the ActiveX control.
  • Per-Site ActiveX – If a website tries to access an installed ActiveX control that is not permitted to run on that website, this new feature in IE 8 gives the user the option of blocking the attempt, allowing the ActiveX control for the current site only, or allowing all websites to access the ActiveX control.
  • Domain Highlighting – The domain name of the site you’re viewing is highlighted in the address bar. By making it clearer to the user which website they’re accessing, this feature serves to protect users against phishing attacks from domain names that look like trusted domain names (e.g., www.paypal.com.hax0r.net, which is not PayPal’s actual website).

Additional Reading / Links

Nuts & Bolts: Everything You Wanted To Know About Cookies But Were Afraid To Ask https://techliberation.com/2009/01/27/nuts-and-bolts-everything-you-wanted-to-know-about-cookies-but-were-afraid-to-ask/ https://techliberation.com/2009/01/27/nuts-and-bolts-everything-you-wanted-to-know-about-cookies-but-were-afraid-to-ask/#comments Tue, 27 Jan 2009 12:25:06 +0000 http://techliberation.com/?p=12932

As a means of introducing myself to TLF readers, this is an article that I wrote for the PFF blog in September that has not been previously mentioned on the TLF. Most of my other PFF blog posts have been cross-posted by Adam Thierer or Berin Szoka, but I’ve taken ownership of those posts so they appear on my TLF author page.

This is the first in a series of articles that will focus directly on technology instead of technology policy. With an average age of 57, most members of Congress were at least 30 when the IBM PC was introduced in 1981. So it is not surprising that lawmakers have difficulty with cutting-edge technology. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed, but no insult to the reader’s intelligence is intended.

This article focuses on cookies–not the cookies you eat, but the cookies associated with browsing the World Wide Web. There has been public concern over the privacy implications of cookies since they were first developed. But to understand them, you must know a bit of history.

According to Tim Berners Lee, the creator of the World Wide Web, “[g]etting people to put data on the Web often was a question of getting them to change perspective, from thinking of the user’s access to it not as interaction with, say, an online library system, but as navigation th[r]ough a set of virtual pages in some abstract space. In this concept, users could bookmark any place and return to it, and could make links into any place from another document. This would give a feeling of persistence, of an ongoing existence, to each page.”[1. Tim Berners-Lee, Weaving The Web: The Original Design and Ultimate Destiny of the World Wide Web. p. 37. Harper Business (2000).] The Web has changed quite a bit since the early 1990s.

Today, websites are much more dynamic and interactive, with every page being customized for each user. Such customization could include automatically selecting the appropriate language for the user based on where they’re located, displaying only content that has been added since the last time the user visited the site, remembering a user who wants to stay logged into a site from a particular computer, or keeping track of items in a virtual shopping cart. These features are simply not possible without the ability for a website to distinguish one user from another and to remember a user as they navigate from one page to another. Today, in the Web 2.0 era, instead of Web pages having persistence (as Berners-Lee described), we have dynamic pages and “user-persistence.”

This paper describes the various methods websites can use to enable user-persistence and how this affects user privacy. But the first thing the reader must realize is that the Web was not initially designed to be interactive; indeed, as the quote above shows, the goal was the exact opposite. Yet interactivity is critical to many of the things we all take for granted about web content and services today.

Stateful Sessions

On the original World Wide Web designed by Berners-Lee (Web 1.0), Web servers responded to each client request without relating that request to previous requests. There was no need to remember what other pages the user had requested because the requests were for static pages. But if you’ve used a Web-based email system like Gmail, Hotmail, Yahoo! Mail, etc., you know that once you log in, the service remembers who you are as you click from message to message. When a website can keep track of a user as they move from page to page within a site, it is called a “stateful session.” The website doesn’t necessarily need to know anything about the user, it just needs to be able to distinguish that particular user from all other users. For example, if you go to an online store and place a few items in your virtual shopping cart, the site still does not know your name, email address, or billing information. But it does know what you’ve placed in your cart–or more precisely, it knows what someone using your browser has placed in a particular cart. If you leave the site before buying anything and then go back an hour later, it’s possible that the site will have completely forgotten about you. In that case, the unique identifier persists during your “session” on the site, but it doesn’t persist between sessions.
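A stateful session of this kind can be sketched in a few lines of Python. The store, the cart, and the identifier below are purely illustrative; the point is only that the server keys per-user state off an opaque token it knows nothing else about:

```python
import secrets

# Toy server-side session store: per-user state (here, a shopping cart)
# is keyed off an opaque identifier handed to the browser.
sessions = {}

def new_session():
    sid = secrets.token_hex(8)      # opaque ID; reveals nothing about the user
    sessions[sid] = {"cart": []}
    return sid

sid = new_session()
sessions[sid]["cart"].append("widget")
# The site knows only that "whoever holds sid" has a widget in their cart.
print(sessions[sid]["cart"])
```

If the browser loses the identifier (say, the session expires), the server has no way to reconnect the user to that cart, which is exactly the between-sessions forgetting described above.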

URLs and HTTP Requests

Web 1.0 sites achieve Web page persistence by having a unique address or Uniform Resource Locator (URL) for each Web page, which is displayed in the address bar at the top of your browser as you browse the web. For example, http://www.pff.org/about/ is a simple URL pointing to a specific Web page. Every user that visits the PFF site at www.pff.org and clicks on the “About” link will be taken to the exact same page.

URLs can also store information about the user. For example, if you search for “test” on Google, the URL of the resulting page may look like the following: http://www.google.com/search?q=test&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a.[2. http://googlesystem.blogspot.com/2006/07/meaning-of-parameters-in-google-query.html] The URL contains a number of different pieces of data, separated by ampersands. There is the search query (“q=test”), the character encoding of the input (“ie=utf-8”), the character encoding of the output (“oe=utf-8”), the type and language of the client (“rls=org.mozilla:en-US:official”), and the Web browser used (“client=firefox-a”). None of this information can be used to uniquely identify the user, but this basic example illustrates how URLs can be used to specify more than simply static Web pages–and how some information can be remembered as a user navigates a website even without using cookies. Knowing how this works, you can create your own advanced searches or change the way the results are formatted (e.g., changing the language).
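Python’s standard library can pull such a query string apart, which makes the structure easy to see. The URL below is the same illustrative Google search URL from above:

```python
from urllib.parse import urlparse, parse_qs

url = ("http://www.google.com/search?q=test&ie=utf-8&oe=utf-8"
       "&aq=t&rls=org.mozilla:en-US:official&client=firefox-a")

# Each parameter maps to a list of values, since a key may repeat in a URL
params = parse_qs(urlparse(url).query)
print(params["q"])       # ['test']
print(params["client"])  # ['firefox-a']
```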

So how did Google know I speak English and use Firefox? That information is included in the HTTP request that my Web browser sends to the Google Web server when it requests a page. HTTP requests specify (among a few other more technical things) the desired language and a “User-Agent” field that includes the name of the browser and sometimes your operating system. This information allows websites to customize their content for different Web browsers (e.g., to ensure that it displays properly). HTTP requests also include your IP address so the Web server knows where to send its response, and IP geolocation services allow Web servers to associate an IP address with a geographic area (though the area is rarely more precise than the country or state). HTTP requests can also contain HTTP cookies.
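A raw HTTP request carrying these fields looks roughly like the following sketch. The header values are illustrative, not captured from any real browser session:

```python
# A sketch of the request a browser might send; values are made up.
raw_request = (
    "GET /search?q=test HTTP/1.1\r\n"
    "Host: www.google.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 5.1) Firefox/3.0\r\n"
    "Accept-Language: en-US,en;q=0.8\r\n"
    "\r\n"
)

# What the server sees once it splits the headers apart:
lines = raw_request.split("\r\n")
headers = dict(line.split(": ", 1) for line in lines[1:] if line)
print(headers["User-Agent"])
print(headers["Accept-Language"])
```

Everything the site “knows” about the browser at this point comes from these headers plus the IP address the request arrived from.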

HTTP Cookies

URLs can be used to uniquely identify individual users and allow stateful sessions, but unless a user bookmarks the URL containing their unique identifier, there is no way for the site to associate the same unique identifier with the same user on subsequent visits. Another option is to have users create an account and then log in each time they access the site. The website could then include the user’s unique ID in the URL on subsequent pages, so that the user only needs to log in once per session. Having to bookmark or create an account on every site you want to remember you would quickly become unmanageable. It would be nice if mapping and weather websites, for example, just remembered your location. It would be nice if the blogs you follow remembered what post you last read and displayed only unread posts when you next visit their site. What was needed at this point in the Web’s evolution was a way for websites to automatically store a unique identifier on the user’s computer and send it back to the website automatically[3. A site could also try to uniquely identify users by the IP address of their computer, but this is unreliable as there can be many computers behind a firewall sharing a single IP address.]—which is precisely what a cookie does.

To quote Wikipedia,

“HTTP cookies, or more commonly referred to as Web cookies, tracking cookies or just cookies, are parcels of text sent by a server to a Web client (usually a browser) and then sent back unchanged by the client each time it accesses that server. HTTP cookies are used for authenticating, session tracking (state maintenance), and maintaining specific information about users, such as site preferences or the contents of their electronic shopping carts.”

A cookie can contain one or more pieces of data, a description and/or URL for an online description of the cookie, how long the Web browser should store the cookie, and the domain, path, and port that the cookie should be limited to. Cookies can be set to expire after a specified interval, or can be “session cookies” that will expire when the Web browser is closed. When a cookie expires, it is deleted by the Web browser. Unexpired cookies are automatically sent back to the originating Web server when the Web browser makes any subsequent requests to the same server (the same domain, path, and port).

Neither Web servers nor Web browsers are required to support cookies, but a server may refuse to work with a Web browser that does not return the cookie(s) it sends. Cookies do not contain any executable code and are extremely small in size. They only contain data sent by the website and the data is not changed by the client computer, so there generally should be no privacy concerns about sending a cookie back to the website that created it (“First-party cookies”).
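Python’s `http.cookies` module shows both halves of the exchange just described. The identifier, domain, and lifetime below are made up for illustration:

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header carrying an opaque identifier.
cookie = SimpleCookie()
cookie["uid"] = "a1b2c3"
cookie["uid"]["domain"] = "example.com"
cookie["uid"]["path"] = "/"
cookie["uid"]["max-age"] = 60 * 60 * 24 * 30   # persist for 30 days
header = cookie["uid"].OutputString()
print(header)   # name=value plus the Domain, Path, and Max-Age attributes

# Client side: on later requests the browser returns just the name/value pair.
returned = SimpleCookie("uid=a1b2c3")
print(returned["uid"].value)
```

Note that the attributes (domain, path, expiry) travel only from server to browser; the browser sends back nothing but the bare `uid=a1b2c3` pair.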

First-Party and Third-Party Cookies

Cookies are normally only sent to the server setting them or a server in the same domain (e.g., a cookie set by mail.google.com could be shared with calendar.google.com). These are called first-party cookies because they’re set by the site displayed in the address bar of the Web browser. These cookies are typically used to tailor the website for the user. Third-party cookies, on the other hand, are typically used by advertising networks to track users across multiple Web sites where the networks have placed advertising–which allows the advertising network to target subsequent advertisements to the user’s presumed interests and also to limit the number of times a user is shown a particular ad. This targeting allows the delivery of “smarter” advertising that is less annoying and more informative to the user–and therefore more valuable to the advertiser, who will be willing to pay websites more for their ad space. However, this targeting also raises privacy concerns.

It is trivial for a Web page to contain images or other components stored on servers in other domains (“third-party elements”). In fact, it is often easier to link to an image already hosted online elsewhere than it is to host an image on your own Website.

Examples:

  • Typical first-party embedded image:
  • Typical third-party embedded image:

Whenever a Web browser loads a Web page or component of a Web page, it will include in its request for that component any cookies already stored on the user’s computer that are associated with the domain hosting the content. The Web server, in turn, can send a cookie or update a cookie already existing on the user’s computer.

Although your Web browser will not send a third-party cookie to the first-party Web server (and it won’t send a first-party cookie to the third-party Web server), the first-party Web server can send information to the third-party Web server by embedding it in the URL for the third-party content. The most common form of this communication between the sites you visit and the sites they rely on for content or ads is called a “web bug”–a small (usually 1 pixel by 1 pixel) graphic not meant to be noticed by the user. Its purpose is to cause the user’s Web browser to load the third-party embedded content from the external Web server, which will allow the third party (usually an advertising network) to track the user.

  • Example third-party embedded web bug:
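The URL side of that trick can be sketched as follows: the first party encodes page context into the web bug’s address, and the browser hands that context to the ad network when it fetches the “image.” The domains and parameters here are hypothetical:

```python
from urllib.parse import urlencode

# The first-party page embeds a 1x1 image whose URL points at the ad
# network and carries the page context as query parameters.
context = {"site": "news.example.com", "page": "/sports/article42"}
bug_url = "http://ads.example.net/pixel.gif?" + urlencode(context)
print(bug_url)
# Loading this "image" sends the ad network its own cookies for
# ads.example.net plus everything encoded in the URL above.
```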

While this all may seem scary and invasive, the fact that a website or ad network can uniquely identify your browser does not mean that they have any clue who you are. Even if you provide your name, email address, or other personally-identifiable information to the first-party Web site, most sites’ privacy policies state that they will not share this information with their advertising partners. To use a real-world analogy, third-party advertising is equivalent to a marketer in a mall watching you come out of a music store and then offering you a flyer for a concert: The marketer may know that you’re interested in music (because you were shopping at the music store), but they have no idea who you are. And as my colleagues Adam Thierer and Berin Szoka explained in their post on Adblock Plus, websites (especially smaller independent websites) depend on advertising as a source of revenue and to cover their overhead costs.

Alternatives to Cookies

Cookies are not the only way websites can do stateful sessions. As has already been mentioned, websites can put unique identifiers in URLs. But custom URLs don’t last between sessions. Websites that need to remember users (e.g., websites that charge a fee for access) can require users to create an account and log into the site every time they use it.

But most websites do not require users to create an account and log in every time. And more and more users are configuring their Web browsers to delete all cookies when they close the browser. In response, Web site operators have found other methods to uniquely identify users by storing a unique identifier on users’ computers.

The cookie alternatives listed below are not any more or less invasive of privacy than cookies if the user is aware of them and manages them the same way they manage cookies. But most Web browsers don’t give users the same amount of control over cookie alternatives that they do over cookies, and few users know about these alternatives.

Per-session cookie alternatives – These cookie alternatives are not saved to disk and thus are not accessible after you close your Web browser.

  • Hidden form fields – Web pages can contain hidden Web forms that submit data back to the Web server when an on-screen button is pressed. This method is quite limited because it requires the user to click a specific button, and there is no method for saving data after you’ve navigated away from the site. Beyond these limitations, the only way to detect hidden form fields is to inspect the HTML code for a page. There is also no easy way to block hidden form fields.
  • window.name – JavaScript embedded in a Web page can set or read this internal value, which is not really used for anything else. The value can be up to 32 megabytes in size, and once set, it can be accessed by any Web site. Although the only way to detect this technique is to inspect the HTML code for a page, you can disable JavaScript.

Persistent cookie alternatives – These cookie alternatives are like cookies in that they are saved on your computer and can be accessed even after you’ve closed your Web browser.

  • Flash Cookies – Also known as Local Shared Objects, Flash cookies require Adobe Flash to be installed on your computer. Whereas HTTP cookies are limited to 4 kilobytes, Flash cookies can hold up to 100 kilobytes by default, or an unlimited amount of data if the user allows it. To view and delete the Flash cookies stored on your computer, go to this page (although accessed via a Web page, the Flash cookies shown are stored on your computer). You can also permanently disable Flash cookies on that page.
  • DOM Storage – DOM storage was designed specifically to allow Web 2.0 applications to work offline, saving data locally when they are unable to access the host website and preserving data that would otherwise be lost if a page is accidentally reloaded. DOM storage is currently implemented only in Firefox (and Internet Explorer 8 Beta). If cookies are disabled, DOM storage is also disabled. Users can also manually disable DOM storage even when cookies are enabled.
  • userData behavior – The userData behavior does for Internet Explorer what DOM storage does for Firefox. Each “document” is limited to 128 kilobytes of storage, with a per-domain limit of 1024 kilobytes. The data is stored in Internet Explorer’s cache and is deleted when you delete cookies using the Delete Browsing History dialog box.

Conclusion

This article should give you a better sense of what cookies are used for and how they work. You should now see that per-session cookies and cookie alternatives are completely harmless. Persistent cookies (and cookie alternatives) can make your Web browsing a bit easier, but deleting them will not (in most cases) cause any problems. If you are concerned about your privacy, you will need to do a bit more than just delete cookies–you also need to delete or disable the above-mentioned cookie alternatives.

Privacy Solutions (Part 2): Adblock Plus https://techliberation.com/2008/09/08/privacy-solutions-series-part-2-adblock-plus/ https://techliberation.com/2008/09/08/privacy-solutions-series-part-2-adblock-plus/#comments Mon, 08 Sep 2008 21:42:25 +0000 http://techliberation.com/?p=12419

By Adam Thierer & Berin Szoka

The goal of our “Privacy Solution Series,” as we noted in the first installment, is to detail the many “technologies of evasion” (i.e., user-empowerment or user “self-help” tools) that allow web surfers to better protect their privacy online—and especially to defeat tracking for online behavioral advertising purposes.  These tools and methods form an important part of a layered approach that, in our view, provides an effective alternative to government-mandated regulation of online privacy.

In this second installment in this series, we will highlight Adblock Plus (ABP), a free downloadable extension for the Firefox web browser (as well as for the Flock browser, though we focus on the Firefox version here).

Adblock Plus

Purpose: The primary purpose of Adblock Plus is to block online ads from being downloaded and displayed on a user’s screen as they browse the Web.  In a broad sense, this functionality might be considered a “privacy” tool by those who consider it an intrusion upon, or violation of, their “privacy” to be “subjected” to seeing advertisements as they browse the web.  But if one thinks of privacy in terms of what others know about you, Adblocking is not so much about “privacy” as about user annoyance (measured in terms of distracting images cluttering webpages or simply in terms of long download times for webpages).  In this sense, ABP may not qualify as a “technology of evasion,” strictly speaking.  But, as explained below the fold, ABP does allow its users to “evade” some forms of online tracking by blocking the receipt of some, but not all, tracking cookies.

Cost: Like almost all other Firefox add-ons, both the ABP extensions and the filter subscriptions on which it relies (as described below) are free.

Popularity / Adoption: While there are a wide variety of ad-blocking tools available, Adblock Plus is far and away the leader.  ABP has proven enormously popular since its release in November 2005 as the successor to Adblock, which was first developed in 2002 and reached over 10,000,000 downloads before being abandoned by its developer; even today, Adblock garners nearly 40,000 downloads a week.  This history of Adblock provides further details.

ABP was named one of the 100 best products of 2007 by PC World magazine and is now the #1 most downloaded add-on for Firefox with over 500,000 weekly downloads, up significantly from just a few months ago.  In a blog post last month, ABP creator Wladimir Palant estimated that “no more than 5% of Firefox users have Adblock Plus installed,” but that percentage is bound to grow larger as more people discover Adblock.  As one indicator of ABP’s popularity, the number of Google searches for “Adblock” has nearly eclipsed the number of searches for “identity theft,” which seems like a far more serious concern than having to look at web ads.

Of course, not every Firefox user would choose to use Adblock even if they were aware of it.  For example, one of us (Berin) finds it indispensable and leaves it on all the time.  The other (Adam) almost never turns it on, preferring to see what sort of ads are being served on each page he visits.  For those users primarily concerned with having their browsing tracked, there are other tools more effective than ABP for that purpose, as future entries in this series will describe.

This raises a point we make in our upcoming paper on online advertising and privacy:  Internet users all have different preferences and sensitivities when it comes to ads and online privacy.   Some of us find ads annoying, intrusive, and potentially privacy-violating.  Others of us just don’t care or even find some informational benefit in seeing them—especially when they are tailored to our particular interests.  Fortunately, tools like Adblock Plus let us each decide for ourselves what sort of browsing experience and privacy protections to use—rather than relying on the heavy, clumsy hand of Big Government to impose sweeping regulations that make a one-size-fits-all determination for everyone.

How Adblock Plus Works: Adblock Plus on its own offers nothing more than the capability to filter certain elements (images, external scripts, frames, Flash, etc.) sent to the user’s computer when they attempt to download the contents of a webpage.  Unbeknownst to many users, the HTML code of most webpages includes instructions to download images and other content (such as ads) stored on that website or on third-party sites.  ABP does not recognize ad images as such, so it cannot automatically distinguish ads from non-ad content.  Instead, ABP relies on a blacklist of terms that the keeper of the list has determined correspond to parts of a URL used to load ads.  The following screenshot illustrates how ABP works:

The user here (Berin) subscribed to EasyList USA, the most commonly-used U.S. “filter” (blacklist + whitelist), when he first installed AdBlock.  (Additional filter subscriptions are available here.)  The “filter rules” are ranked by “Hits,” or the number of ads blocked since the filter was installed (in May 2008).  Shown here are only the top examples of effective filters, such as any URL that begins with “http://ad.” or contains “/ads/”.  Also shown here are three custom ad filters created by Berin.  This clip (click on “Show me how this is done”) illustrates how users can block images to create their own custom ad filter.  Last, the green text is just the most commonly-applied filter rule contained in EasyList’s white list of terms that should not be blocked, trumping blacklist filters.  For example, http://wikimedia.org/wikipedia/ads/… would normally be blocked because of the “/ads/” filter rule in the blacklist, but the green white list filter rule in our example trumps that rule to make sure that all URLs containing “http://*.wikimedia.org/wikipedia” (where * is a wildcard operator) will not be blocked.
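The blacklist-plus-whitelist logic can be approximated in a few lines. The patterns below are simplified stand-ins for real EasyList rules, and the URLs are illustrative:

```python
from fnmatch import fnmatchcase

# Simplified stand-ins for EasyList-style rules; '*' is a wildcard.
blacklist = ["http://ad.*", "*/ads/*"]
whitelist = ["http://*.wikimedia.org/wikipedia*"]

def blocked(url):
    # A whitelist match trumps any blacklist match.
    if any(fnmatchcase(url, p) for p in whitelist):
        return False
    return any(fnmatchcase(url, p) for p in blacklist)

print(blocked("http://ad.doubleclick.net/banner.gif"))         # True
print(blocked("http://en.wikimedia.org/wikipedia/ads/x.png"))  # False
```

Because the matching is purely textual, anything a site does to make ad URLs look like content URLs defeats the filter, which is exactly the countermeasure discussed below.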

As mentioned above, ABP can block the downloading of some tracking cookies by preventing the user’s computer from attempting to download an element (usually an image) associated with that cookie—called “web bugs” or “web beacons.”  As Wikipedia explains:

Originally, a Web bug was a small (usually 1×1 pixel) transparent GIF or PNG image (or an image of the same colour of the background) that was embedded in an HTML page, usually a page on the Web or the content of an e-mail. Modern Web bugs also use the HTML IFrame, style, script, input link, embed, object, and other tags to track usage. Whenever the user opens the page with a graphical browser or e-mail reader, the image or other information is downloaded. This download requires the browser to request the image from the server storing it, allowing the server to take notice of the download. As a result, the organization running the server is informed when the HTML page has been viewed.

Larger Implications: As you can imagine, advertising networks and advertisers are less than thrilled about the idea of users blocking their ads, but it is website operators that have thus far objected most strongly to ad-blocking, because it threatens what is for many websites the only source of revenue.  Even amateur sites that do not have to pay for content production often rely on advertising revenue to cover other costs, such as hosting.  It’s not hard to imagine why many site operators might want to discourage or thwart ad-blocking to maintain the quid pro quo of the online economy:  Users get free content and services from websites in exchange for looking at advertising, which websites can sell through ad networks to advertisers.  This dilemma is not unique to the online world, of course.  In the offline context, television advertisers have responded to ad-skipping via DVRs through increasing reliance on product placement.

But because web-browsing is an essentially interactive experience between the user’s browser and the website, website operators may have greater leverage in the relationship with a user who wants to block ads.  In particular, the website may be able to detect the use of ABP, at least indirectly through the pattern of page element blocking caused by ABP’s use. (Prior to June 2008, websites could directly detect whether a browser was using ABP by noticing the presence of an API interface designed to allow ABP to work with other extensions, but this feature was removed in a recent update to ABP.)

Thus, once adblocking rises above a certain “acceptable loss” threshold, a website could respond in at least three distinct ways:

  1. Moral exhortation – websites might display this kind of pop-up notice to ABP users:

  2. “Blocking” adblocking – Because ABP relies on relatively crude keyword filters to distinguish ad elements of a page from content elements, websites can confuse these filters by making advertisements less easily distinguishable from content.  On the one hand, websites might attempt to “embed” advertisements a la television product placement.  On the other, we may see ad networks rely more on distributing ads through websites directly, rather than from ad network servers, so that adblocking filters cannot easily identify ads by the source referenced in their URL.
  3. Tying website functionality to the acceptance of tracking cookies – As mentioned above, Adblock will block some “tracking cookies” by blocking the download of web beacons from ad network servers—which is often how such cookies are placed on the user’s computer in the first place.  By tying full site functionality to the acceptance of those cookies, websites might be able to require users to accept tracking cookies in exchange for full access to the site.

As is so often the case, this will likely result in a war of “spy v. spy,” whereby the user community develops better evasive measures, the website community develops better countermeasures, and so on–as illustrated in this scene from the 1998 Marky Mark cult-classic film, The Big Hit. (Warning: Includes foul language.)

http://www.youtube.com/v/xJ0FSQF7cGk&hl=en&fs=1

Related Reading & Links

Privacy Solutions (Part 1): Introduction https://techliberation.com/2008/09/05/privacy-solutions-series-part-1-introduction/ https://techliberation.com/2008/09/05/privacy-solutions-series-part-1-introduction/#comments Fri, 05 Sep 2008 16:23:36 +0000 http://techliberation.com/?p=12376

By Adam Thierer & Berin Szoka

Whatever ordinary Americans actually think about online privacy, it remains a hot topic inside the Beltway. While much of that amorphous concern focuses on government surveillance and government access to information about web users, many in Washington have focused on targeted online advertising by private companies as a dire threat to Americans’ privacy — and called for prophylactic government regulation of an industry that is expected to more than double in size to $50.3 billion in 2011 from $21.7 billion last year.

In 1998, when targeted advertising was in its infancy, the FTC proposed four principles as the basis for self-regulation of online data collection: notice, choice, access & security. In 2000, the Commission declared that too few online advertisers adhered to these principles and therefore recommended that Congress mandate their application in legislation that would allow the FTC to issue binding regulations. Subsequent legislative proposals (indexed by CDT here, along with other privacy bills) have languished in Congress ever since. During this time, self-regulation of data collection (e.g., the National Advertising Initiative) has matured, the industry has flourished without any clear harm to users, and the FTC has returned to its original support for self-regulation over legislation or regulatory mandates.

But over the last year, the advocates of regulation have succeeded in painting a nightmarish picture of all-invasive snooping by online advertisers using more sophisticated techniques of collecting data for targeted advertising. The Federal Trade Commission (FTC) has responded cautiously by proposing voluntary self-regulatory guidelines intended to address these concerns, because the agency recognizes that this growing revenue stream is funding the explosion of “free” (to the user) online content and services that so many Americans now take for granted, and that more sophisticated targeting produces ads that are more relevant to consumers (and therefore also more profitable to advertisers).

The Hill has responded by holding hearings, sending out angry letters to online advertisers, and demanding that ISPs cease experimenting with a new form of online behavioral advertising (OBA) based on packet inspection. Some in the think tank community have cheered this on, demanding draconian regulation. But before rushing to regulate — and potentially choking the economic engine fueling “free” online content and services — policymakers should be asking whether alternatives to command-and-control regulation can adequately address privacy concerns.

We are in the process of penning a major study on this debate, which will challenge those who are calling for regulation to:

(1) Show us the harm or market failure.

(2) Prove to us that no less restrictive alternative to regulation exists.

(3) Explain to us how the benefits of regulation outweigh the costs.

It is the second point that we would like to focus on in a series of upcoming (and likely ongoing) blog entries.
Building on the excellent work of our TLF colleague Ryan Radia, we plan to detail the many “technologies of evasion” (i.e., empowerment or user “self-help” tools) that allow web surfers to better protect their privacy online, and especially to defeat tracking for OBA purposes. These tools and methods form an important part of a layered approach that, in our view, provides an effective alternative to government-mandated regulation. Such an approach would also include user education, self-regulatory schemes like the National Advertising Initiative, and FTC enforcement of privacy policies.
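One of the oldest and simplest of these self-help tools is hosts-file blocking: mapping known ad-serving and tracking domains to the loopback address so the browser never contacts them at all. The sketch below illustrates the idea in Python; the domain names are hypothetical placeholders (real blocklists are community-maintained), and it writes to a local file rather than the system's actual hosts file.

```python
# Sketch of hosts-file blocking: map known ad/tracking domains
# (hypothetical names here) to the loopback address so lookups for
# them never leave the machine. A real setup would append these lines
# to /etc/hosts with administrator rights, using a maintained blocklist.
entries = {
    "ads.tracker-a.example": "127.0.0.1",
    "beacon.tracker-b.example": "127.0.0.1",
}

with open("hosts.blocklist", "w") as f:
    for domain, addr in entries.items():
        f.write(f"{addr} {domain}\n")

# Each listed domain now resolves locally instead of to the ad server.
blocked = [line.split()[1] for line in open("hosts.blocklist")]
print(blocked)  # → ['ads.tracker-a.example', 'beacon.tracker-b.example']
```

The appeal of this technique is that it requires no special software and works for every browser on the machine at once, though it blocks at the granularity of whole domains rather than individual cookies or requests.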

Before one can determine the true necessity of government intervention (and, indeed, its constitutionality), one must understand the availability, sophistication, and convenience of the technologies of evasion we will describe. In an important 2001 Cato Institute paper, our TLF colleague Tom Bell argues that web surfers must bear some of the responsibility for protecting themselves online, just as they do with regard to potentially objectionable (i.e., “indecent”) online content:

Digital self-help makes unnecessary state action limiting speech that is indecent or harmful to minors. The same argument applies to state action that would limit speech by commercial entities about Internet users. Digital self-help offers more hope of protecting Internet users’ privacy than it does of effectively filtering out unwanted speech, and the availability of such self-help casts doubt on the constitutionality of legislation restricting speech by commercial entities about Internet users. From the more general point of view of policy, moreover, digital self-help offers a better approach to protecting Internet privacy than does state action.

What Bell means is that the digital “self-help” tools consumers rely on to protect themselves or their children from objectionable content must always confront the subjective problem of defining what is indecent or obscene. Thus, even though Internet filtering tools and other parental controls generally offer a very effective means of blocking access to objectionable content, at the margins there will always be definitional controversies. By contrast, the privacy self-help tools we will describe are much more likely to provide an effective shield, because consumers who are truly sensitive about their online privacy can make far more definitive choices about allowing or disallowing cookies, or about whether certain types of personal information may be collected or tracked for targeted advertising purposes.
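To make the cookie point concrete, here is a minimal sketch of the kind of definitive choice such a tool can enforce, using Python's standard-library `http.cookiejar`: cookies set by a blocked tracking domain are silently dropped while first-party cookies are kept. The domain names and the `FakeResponse` helper are illustrative assumptions, not anything from the original post.

```python
import http.cookiejar
import urllib.request
from email.message import Message

# Minimal stand-in for an HTTP response: CookieJar.extract_cookies()
# only needs an info() method returning the response headers.
class FakeResponse:
    def __init__(self, set_cookie):
        self._msg = Message()
        self._msg["Set-Cookie"] = set_cookie
    def info(self):
        return self._msg

# Refuse all cookies from a (hypothetical) tracking domain.
policy = http.cookiejar.DefaultCookiePolicy(blocked_domains=["tracker.example"])
jar = http.cookiejar.CookieJar(policy)

# The first-party cookie from the site the user actually visited is kept...
jar.extract_cookies(FakeResponse("session=abc; Path=/"),
                    urllib.request.Request("http://news.example/"))
# ...while the tracker's identifying cookie is dropped by the policy.
jar.extract_cookies(FakeResponse("uid=123; Path=/"),
                    urllib.request.Request("http://tracker.example/"))

print(sorted(c.name for c in jar))  # → ['session']
```

Browsers of the era exposed exactly this sort of per-domain allow/deny choice in their cookie preferences, which is why it is a far crisper decision than judging whether content is “indecent.”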

Finally, Bell correctly notes that “digital self-help” is more likely to be effective than regulatory solutions for a variety of reasons, not least of which is that truly “bad” actors on the Internet are rarely stopped, or even discouraged, by regulation when the activity involves the pure exchange of bits (as opposed to physical purchases or shipments), because they can generally continue their activities from offshore. In such cases, technical means are the only way of stopping them.

We invite you to share examples of technologies of evasion with us as we go along. And we hope that our TLF colleagues will chime in with entries of their own as they find privacy-enhancing technologies that privacy-conscious web surfers can employ to take privacy into their own hands.

– Adam Thierer & Berin Szoka

]]>