Privacy Solutions

Anupam Chander, Director of the California International Law Center and Martin Luther King, Jr. Hall Research Scholar at the UC Davis School of Law, discusses his recent paper, co-authored with Uyen P. Lee, titled The Free Speech Foundations of Cyberlaw. Chander addresses how the First Amendment promotes innovation on the Internet; how limitations to free speech vary between the US and Europe; the role of online intermediaries in promoting and protecting the First Amendment; the Communications Decency Act; technology, piracy, and copyright protection; and the tension between privacy and free speech.


Reason.org has just posted my commentary on the five reasons why the Federal Trade Commission’s proposals to regulate the collection and use of consumer information on the Web will do more harm than good.

As I note, the digital economy runs on information. Any regulations that impede the collection and processing of that information will reduce its efficiency. Given the overall success of the Web and the popularity of search and social media, there’s every reason to believe that consumers have been able to balance their demand for content, entertainment, and information services against the privacy policies those services adopt.

But there’s more to it than that. Technology simply doesn’t lend itself to top-down mandates. Notions of privacy are highly subjective. Online, an adaptive dynamic is constantly at work. Certainly, websites have sometimes pushed the boundaries of privacy. But only when the boundaries are tested do we find out where the consensus lies.

Legislative and regulatory directives pre-empt experimentation. Consumer needs are best addressed when best practices are allowed to bubble up through trial-and-error. When the economic and functional development of European Web media, which labors under the sweeping top-down European Union Privacy Directive, is contrasted with the dynamism of the U.S. Web media sector, which has been relatively free of privacy regulation, the difference is profound.

An analysis of the web advertising market undertaken by researchers at the University of Toronto found that after the Privacy Directive was passed, online advertising effectiveness decreased on average by around 65 percent in Europe relative to the rest of the world. Even when the researchers controlled for possible differences in ad responsiveness between Europeans and Americans, the disparity persisted. The authors conclude that these findings will have a “striking impact” on the $8 billion spent each year on digital advertising: namely, that European sites will see far less ad revenue than their counterparts outside Europe.

Other points I explore in the commentary are:

  • How, under such regulation, free services go away and paywalls go up
  • How consumers push back when they perceive that their privacy is being violated
  • How Web advertising lives or dies by the willingness of consumers to participate
  • How greater information availability is a social good

The full commentary can be found here.


Do-Not-Track is not itself inconceivable. But like the word “inconceivable” in the movie The Princess Bride, I do not think it means what people think it means: people misunderstand both how it is meant to work and how likely it is to produce poor results.

Take Mike Swift’s reporting for MercuryNews.com on a study showing that online advertising companies may continue to follow visitors’ Web activity even after those visitors have opted out of tracking.

“The preliminary research has sparked renewed calls from privacy groups and Congress for a ‘Do Not Track’ law to allow people to opt out of tracking, like the Do Not Call list that limits telemarketers,” he writes.

If this is true, it means that people want a Do Not Track law more now that they have learned it would be more difficult to enforce.

That doesn’t make sense … until you look at who Swift interviewed for the article: a Member of Congress who made her name as a privacy regulation hawk and some fiercely committed advocates of regulation. These people were not on the fence before the study, needless to say. (Anne Toth of Yahoo! provides the requisite ounce of balance, but she defends her company and does not address the merits or demerits of a Do-Not-Track law.)

Do-Not-Track is not inconceivable. But the study shows that its advocates are not conceiving of the complexities and drawbacks of a regulatory approach, as compared with individually tailored blocking of unwanted tracking, which any Internet user can do right now using Tracking Protection Lists.
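To see why enforcement is the hard part, it helps to look at the mechanics. Under the leading technical proposal, Do-Not-Track is just an HTTP header the browser attaches to each request; nothing in the protocol compels the receiving server to honor it. A minimal sketch using Python’s requests library (the URL is a placeholder):

```python
import requests

# The browser's entire role under header-based Do-Not-Track: attach
# "DNT: 1" to the request. Honoring it is left to the tracker.
response = requests.get(
    "https://ad-network.example/pixel",  # placeholder tracker URL
    headers={"DNT": "1"},
)
print(response.status_code)  # the request goes through whether or not DNT is honored
```

Contrast that with a Tracking Protection List, which blocks the request from ever being sent, requiring no cooperation from the tracker at all.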

Social widgets, such as the now-ubiquitous Facebook “Like” button and Twitter “Tweet” button, offer users a convenient way to share online content with their friends and followers. These widgets have recently come under scrutiny for their privacy implications. Yesterday, The Wall Street Journal reported that Facebook, Twitter, and Google are informed each time a user visits a webpage that contains one of the respective company’s widgets:

Internet users tap Facebook Inc.’s “Like” and Twitter Inc.’s “Tweet” buttons to share content with friends. But these tools also let their makers collect data about the websites people are visiting. These so-called social widgets, which appear atop stories on news sites or alongside products on retail sites, notify Facebook and Twitter that a person visited those sites even when users don’t click on the buttons, according to a study done for The Wall Street Journal.
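The mechanics are worth spelling out. Embedding a widget means the reader’s browser fetches the button directly from the widget provider’s servers, and that request automatically carries the address of the page being read along with any cookies the provider previously set. Here is a rough sketch of what such a request looks like; the endpoint, parameters, and cookie value are illustrative placeholders, not Facebook’s actual interface:

```python
import requests

# Illustrative only: roughly the request a browser makes when a page
# embeds a "Like" button, whether or not the user ever clicks it.
response = requests.get(
    "https://www.facebook.com/plugins/like.php",  # widget iframe source (illustrative)
    params={"href": "https://news-site.example/some-article"},  # the page being read
    headers={
        "Referer": "https://news-site.example/some-article",
        # Any cookie the provider previously set rides along, which can
        # tie the visit to a logged-in account:
        "Cookie": "c_user=PLACEHOLDER",
    },
)
print(response.status_code)
```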

It wasn’t exactly a secret that social widgets “phone home.” However, the Journal’s story shed new light on how the firms that offer social widgets handle the data they glean regarding user browsing habits. Facebook and Google reportedly store this data for a limited period of time — two weeks and 90 days, respectively — and, importantly, the data isn’t recorded in a way that can be tied back to a user (unless, of course, the user affirmatively decides to “like” a webpage). Twitter reportedly records browsing data as well, but deletes it “quickly.”

Assuming the companies effectively anonymize the data they glean from their social widgets, privacy-conscious users have little reason to worry. I’m not aware of any evidence that social widget data has been misused or breached. However, as Pete Warden reminded us in an informative O’Reilly Radar essay posted earlier this week, anonymizing data is harder than it sounds, and supposedly “anonymous” data sets have been successfully de-anonymized on several occasions. (For more on the de-anonymization of data sets, see Arvind Narayanan and Vitaly Shmatikov’s 2008 research paper on the topic).
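Narayanan and Shmatikov’s techniques are statistical, but even the crudest version of the problem is easy to demonstrate. Suppose a firm “anonymizes” its logs by hashing IP addresses: because the IPv4 space is so small, an attacker can simply hash every candidate address and match records back to users. A toy sketch (restricted to a /16 range so it runs instantly; the target address is made up):

```python
import hashlib
import ipaddress

# The "anonymized" record: a hash standing in for a user's IP address.
target = hashlib.sha256(b"192.168.42.7").hexdigest()

# Brute-force the candidate space and look for a match.
for ip in ipaddress.ip_network("192.168.0.0/16"):
    if hashlib.sha256(str(ip).encode()).hexdigest() == target:
        print(f"De-anonymized: {ip}")
        break
```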

Continue reading →

I spaced out and completely forgot to post a link here to my latest Forbes column, which came out over the weekend. It’s a look back at last week’s hullabaloo over “Apple, The iPhone, and a Locational Privacy Techno-Panic.” In it, I argue:

Some of the concerns raised about the retention of locational data are valid. But panic, prohibition and a “privacy precautionary principle” that would preemptively block technological innovation until government regulators give their blessings are not valid answers to these concerns. The struggle to conceptualize and protect privacy rights should be an evolutionary and experimental process, not one micro-managed at every turn by regulation.

I conclude the piece by noting that:

Public pressure and market norms also encourage companies to correct bone-headed mistakes like the locational info retained by Apple.  But we shouldn’t expect less data collection or less “tracking” any time soon.  Information powers the digital economy, and we must learn to assimilate new technology into our lives.

Read the rest here. And if you missed the essay Larry Downes posted here on the same subject last week, make sure to check it out.

Inspired by thoughtful pieces by Mike Masnick on Techdirt and L. Gordon Crovitz’s column yesterday in The Wall Street Journal, I wrote a perspective piece this morning for CNET regarding the European Commission’s recently proposed “right to be forgotten.”

A Nov. 4th report promises new legislation next year “clarifying” this right under EU law, suggesting not only that the Commission thinks it’s a good idea but, even more surprisingly, that it already exists under the landmark 1995 Privacy Directive.

What is the “right to be forgotten”?  The report is cryptic and awkward on this important point, describing “the so-called ‘right to be forgotten’, i.e. the right of individuals to have their data no longer processed and deleted when they [that is, the data] are no longer needed for legitimate purposes.”

Continue reading →

I participated last week in a Techdirt webinar titled, “What IT needs to know about Law.”  (You can read Dennis Yang’s summary here, or follow his link to watch the full one-hour discussion.  Free registration required.)

The key message of  The Laws of Disruption is that IT and other executives need to know a great deal about law—and more all the time.  And Techdirt does an admirable job of reporting the latest breakdowns between innovation and regulation on a daily basis.  So I was happy to participate.

Legally-Defensible Security

Not surprisingly, there were far too many topics to cover in a single seminar, so we decided to focus narrowly on just one:  potential legal liability when data security is breached, whether through negligence (lost laptop) or the criminal act of a third party (hacking attacks).  We were fortunate to have as the main presenter David Navetta, founding partner with The Information Law Group, who had recently written an excellent article on what he calls “legally-defensible security” practices.

Continue reading →

At the FTC’s second Exploring Privacy roundtable at Berkeley in January, many of the complaints about online advertising centered on two issues: how difficult it was to control the settings for Adobe’s Flash player, which is used to display ads, videos, and a wide variety of other graphic elements on most modern webpages; and the potential for unscrupulous data collectors to “re-spawn” standard (HTTP) cookies even after a user deleted them, simply by referencing the Flash cookie that domain had stored on the user’s computer, thus circumventing the user’s attempt to clear out their own cookies. Adobe responded to the first criticism by promising to include better privacy management features in Flash 10.1, and to the second by condemning such re-spawning and calling for “a mix of technology tools and regulatory efforts” to deal with the problem (including FTC enforcement). (Adobe’s filing offers a great history of Flash, a summary of its use, and an introduction to Flash cookies, which Adam Marcus detailed here.)

Earlier this week (and less than three weeks later), Adobe rolled out Flash 10.1, which offers an ingenious solution to the problem of how to manage Flash cookies: Flash now simply integrates its privacy controls with Internet Explorer, Firefox, and Chrome (and will soon do so with Safari). So when the user turns on “private browsing mode” in these browsers, Flash cookies will be stored only temporarily, allowing users to use the full functionality of the site, while the Flash Player will “automatically clear any data it might store during a private browsing session, helping to keep your history private.” That’s a pretty big step, and an elegantly simple solution to the problem of how to empower users to take control of their own privacy. Moreover:

Flash Player separates the local storage used in normal browsing from the local storage used during private browsing. So when you enter private browsing mode, sites that you previously visited will not be able to see information they saved on your computer during normal browsing. For example, if you saved your login and password in a web application powered by Flash during normal browsing, the site won’t remember that information when you visit the site under private browsing, keeping your identity private.
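For users who want this kind of control without relying on browser integration, it helps to know that Flash cookies are ordinary files that can be inspected and deleted by hand. They are stored as .sol files under a per-user directory; the sketch below checks the commonly documented default locations on each platform (your install may differ):

```python
from pathlib import Path

# Commonly documented default locations of Flash Local Shared Objects
# (".sol" files); adjust if your installation differs.
CANDIDATE_DIRS = [
    Path.home() / "AppData/Roaming/Macromedia/Flash Player/#SharedObjects",      # Windows
    Path.home() / "Library/Preferences/Macromedia/Flash Player/#SharedObjects",  # macOS
    Path.home() / ".macromedia/Flash_Player/#SharedObjects",                     # Linux
]

for base in CANDIDATE_DIRS:
    if not base.exists():
        continue
    for sol in base.rglob("*.sol"):
        print(f"Deleting Flash cookie: {sol}")
        sol.unlink()
```

This is essentially what the new private-browsing integration automates: Flash 10.1 simply clears the equivalent temporary storage for you when the session ends.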

Continue reading →

By Eric Beach and Adam Marcus

In the previous entry in the Privacy Solutions Series, we described how privacy-sensitive users can use proxy servers to anonymize their web browsing experience, noting that one anonymizer stood out above all others: Tor, a sophisticated anonymizer system developed by the Tor Project, a 501(c)(3) U.S. non-profit supported by industry, privacy advocates, and foundations, whose mission is to “allow you to protect your Internet traffic from analysis.” The Torbutton plug-in for Firefox makes Tor particularly easy to use and has been downloaded over three million times. The TorBrowser Bundle is a pre-configured “portable” package of Tor and Firefox that can run off a USB flash drive without installing anything on the host computer. Like most tools in the Privacy Solutions series, Tor has its downsides and isn’t for everyone. But it offers privacy-sensitive users a powerful way to achieve a degree of privacy that no regulation could provide.
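For a sense of how simple this is to use programmatically, here is a sketch that routes a request through a local Tor client and compares the IP address a website sees with and without Tor. It assumes Tor is running with its default SOCKS5 listener on port 9050 and that the requests library has SOCKS support installed (pip install requests[socks]); httpbin.org/ip is just a convenient echo service:

```python
import requests

# The "socks5h" scheme routes DNS lookups through Tor as well, so the
# local network never even sees the hostname being resolved.
tor_proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# Compare the IP address a website sees with and without Tor.
direct = requests.get("https://httpbin.org/ip").json()["origin"]
via_tor = requests.get("https://httpbin.org/ip", proxies=tor_proxies).json()["origin"]
print(f"Direct: {direct}  Via Tor exit: {via_tor}")
```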

Continue reading →

By Eric Beach & Adam Marcus

Among Internet users, there are a variety of concerns about privacy, security and the ability to access content. Some of these concerns are quite serious, while others may be more debatable. Regardless, the goal of this ongoing series is to detail the tools available to users to implement their own subjective preferences. Anonymizers (such as Tor) allow privacy-sensitive users to protect themselves from the following potential privacy intrusions:

  1. Advertisers Profiling Users. Many online advertising networks build profiles of likely interests associated with a unique cookie ID and/or IP address. Whether this assembling of a “digital dossier” causes any harm to the user is debatable, but users concerned about such profiles can use an anonymizer to make them difficult to build, particularly by changing their IP address regularly (see the sketch after this list).
  2. Compilation and Disclosure of Search Histories. Some privacy advocates, such as EFF and CDT, have expressed legitimate concern about the trend of governments subpoenaing records of citizens’ Internet activity. By causing thousands of users’ activity to be pooled together under a single IP address, anonymizers make it difficult for search engines and other websites (and, therefore, governments) to distinguish the web activities of individual users.
  3. Government Censorship. Some governments prevent their citizens from accessing certain websites by blocking requests to specific IP addresses. But an anonymizer located outside the censoring country can serve as an intermediary, enabling the end-user to circumvent censorship and access the restricted content.
  4. Reverse IP Hacking. Some Internet users may fear that the disclosure of their IP address to a website could increase their risk of being hacked. They can use an anonymizer as an intermediary between themselves and the website, thus preventing disclosure of their IP address to the website.
  5. Traffic Filtering. Some ISPs and access points allocate their Internet bandwidth depending on which websites users are accessing. For example, bandwidth for information from educational websites may be prioritized over Voice-over-IP bandwidth. Under certain circumstances, an anonymizer can obscure the final destination of the end-user’s request, thereby preventing network operators or other intermediaries from shaping traffic in this manner. (Note, though, that to defeat deep packet inspection, an anonymizer must also encrypt the data.)
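As a concrete illustration of the first point, Tor exposes a control port through which a client can request a fresh circuit, and with it a new exit IP. Here is a minimal sketch using the Tor Project’s stem library, assuming a local Tor client with ControlPort 9051 enabled and authentication configured in torrc:

```python
import time

from stem import Signal
from stem.control import Controller

# Assumes a local Tor client with "ControlPort 9051" enabled and
# cookie or password authentication configured in torrc.
with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # uses whatever auth method torrc specifies
    for _ in range(3):
        controller.signal(Signal.NEWNYM)  # ask Tor for a fresh circuit
        print("Requested a new identity (new exit IP)")
        time.sleep(controller.get_newnym_wait())  # respect Tor's built-in rate limit
```

Rotating identities this way makes it far harder for an ad network to accumulate a coherent profile under any one IP address.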

Continue reading →