Republished from the Daily Caller
U.K. Prime Minister David Cameron has declared “everything necessary will be done to restore order” in Britain’s riot-racked cities. With respect to the right honorable gentleman, what distinguishes free from unfree societies is not order, but ordered liberty. As the great Tory philosopher Edmund Burke taught, reconciling liberty and order is the fine art of democratic statecraft. Tweaking that balance as technology evolves requires the most careful and judicious deliberation. Only where cooler heads prevail can ordered liberty thrive.
Cameron’s government has hesitated to escalate physical force with rubber bullets and water cannons, lest they lend moral sanction to the brutal tactics used by China and in the Middle East to suppress dissent. Yet however noble his intentions, Cameron could do more to undermine ordered liberty with “bloodless” measures targeting social media services like Twitter and Facebook, and improperly using photo identification.
Cameron, who championed Internet-driven revolutions in Egypt and Tunisia, told Parliament that the “free flow of information can be used for good, but it can also be used for ill.” His vague response: “We are working with the police, the intelligence services and industry to look at whether it would be right to stop people communicating via these websites and services.”
So far, the only clear call for shutting down social media outright came from a Labour MP, not Cameron’s Tories. David Lammy, who represents the London neighborhood where rioting began, has demanded the suspension of BlackBerry Messenger (BBM) service for “helping rioters outfox Police.” Such a response befits Beijing, not Britain, the birthplace of ordered liberty.
Free societies can and should silence those who incite acts of violence — but not by shutting down speech platforms for all users. Even America’s speech-protective First Amendment allows punishment of speech that is “directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” That standard protects legitimate expression without preventing prosecution of those individuals stoking and organizing riots. The same standard should determine when government may properly force social media systems to take down seditious posts, photos and videos.
A month ago, Rep. Mary Bono Mack introduced a bill (and staff memo) “To protect consumers by requiring reasonable security policies and procedures to protect data containing personal information, and to provide for nationwide notice in the event of a security breach.” These are perhaps the two least objectionable areas for legislating “on privacy” and there’s much to be said for both concepts in principle. Less clear-cut is the bill’s data minimization requirement for the retention of personal information.
But as I finally get a chance to look at the bill on the eve of the July 20 Subcommittee markup, I note one potentially troubling procedural aspect of the bill: giving the FTC authority to redefine PII without the procedural safeguards that normally govern the FTC’s operations. The scope of this definition would be hugely important in the future, both because of the security, breach notification and data minimization requirements attached to it, and because this definition would likely be replicated in future privacy legislation—and changes to this term in one area would likely follow in others.
Adam Thierer has already provided an excellent overview of the Supreme Court’s decision in Brown v. Entertainment Merchants Association, striking down a California law requiring age verification and parental consent for the purchase of “violent” videogames by minors. It’s worth calling attention to two key aspects of the decision.
First, the Supreme Court has clearly affirmed that the First Amendment applies equally to all media, including videogames and other interactive media. The Court has, in the past, often accorded lesser treatment to new media, as Cato’s excellent amicus brief explains [pp 3-15]. This approach, if applied consistently by the Court in the future, will ensure that free speech continues to be protected even as technology evolves in ways scarcely imaginable today.
Second, the Court correctly rejected California’s attempt to justify governmental paternalism as a substitute for parental responsibility [Brown at 15-17]. The existing content rating system and parental controls in videogame consoles already empower parents to make decisions about which games are appropriate for their children and their values. As in the Sorrell decision handed down last week, the Court has rejected what amounts to an opt-in mandate—this time, in favor of letting parents “opt out” of letting their kids play certain games or rating levels rather than requiring that they “opt in” to each purchase. This is the recurring debate about media consumption—from concerns over violent or offensive speech to those surrounding privacy. And once again, speech regulation must yield to the less-restrictive alternatives of empowerment and education.
Both these points were at the heart of the amicus brief I filed with the Supreme Court in this case last fall (press release), along with Adam (my former Progress & Freedom Foundation colleague) and Electronic Frontier Foundation Staff Attorney Lee Tien and Legal Director Cindy Cohn. Here’s the summary of our argument in that brief, which provides as concise an overview of our reasoning as we could manage, broken down into separate bullets with quotations referencing the Court’s decision on that point. As you’ll see, the Court’s decision reflected all our arguments except for one, which the Court’s decision did not reach.
The Supreme Court yesterday handed down a 6-3 decision in Sorrell v. IMS Health Inc. striking down a Vermont law restricting marketing to doctors based on their past history of writing drug prescriptions. The law required that doctors opt in before drug companies could use data about their prescription patterns to market (generally name-brand) drugs to them.
I’ve been closely following this case, having filed a TechFreedom amicus curiae brief with the Supreme Court earlier this year, written by First Amendment expert litigator Richard Ovelmen, and having previously joined with other free speech groups in an amicus brief before the Second Circuit. Our media statement on the Supreme Court brief provides a pretty concise summary of our views and what’s at stake in this case, and Jane Yakowitz’s initial blog reactions are especially worth reading.
The lopsided decision should surprise no one: Vermont’s law was a brazen effort to suppress speech disfavored by the state based on the paternalist assumption that name-brand drug marketing is “too effective.” In essence, the Court has reaffirmed the core meaning of the First Amendment: government must trust the marketplace of ideas unless fraud or deception occurs. Anyone who takes the First Amendment seriously should be roused to applaud when Justice Kennedy writes, for the majority, that “fear that speech might persuade provides no lawful basis for quieting it.” Clearly, this principle is as true for commercial advertising as for any form of speech. I’m particularly glad to see that Justice Sotomayor joined in this decision.
This is just the latest in a line of cases expanding protection for commercial speech stretching back over 30 years to Central Hudson (1980) and including Lorillard (2001) and 44 Liquormart (1996). But the opinion will also surely be remembered as the beginning of another line of cases that attempt to guide lawmakers trying to protect legitimate privacy interests without suppressing speech. The First Circuit, upholding a similar law, had previously deemed prescriber-identifying information a mere “commodity” with no greater entitlement to First Amendment protection than “beef jerky.” But the Supreme Court rejected this, unequivocally declaring that “information is speech,” including both its creation and dissemination, even while recognizing the privacy problems raised by the “capacity of technology to find and publish personal information.”
Facebook announced yesterday that it had finished most of the global roll-out of its automatic photo tag suggestion feature, begun in the U.S. last December. Now ZDNet reports that European privacy regulators are already planning a probe of the feature. Emil Protalinski writes:
“Tags of people on pictures should only happen based on people’s prior consent and it can’t be activated by default,” Gerard Lommel, a Luxembourg member of the so-called Article 29 Data Protection Working Party, told BusinessWeek. Such automatic tagging “can bear a lot of risks for users” and the group of European data protection officials will “clarify to Facebook that this can’t happen like this.”
No doubt our friends at the Extra-Paternalist Internet Cops (EPIC) will jump into the fray with another of their many complaints to the FTC, dripping with outrage that Facebook has “opted us into” this feature. But what’s the big deal, really? Emil explains how things work:
When you upload new photos, Facebook uses software similar to that found in many photo editing tools to match your new photos to other photos you’re tagged in. Similar photos are grouped together and, whenever possible, Facebook suggests the name(s) of your friend(s) in the photos. In other words, the square that magically finds faces in a photo now suggests names of your Facebook friends to streamline the tagging process, especially when the same friends appear in multiple uploaded photos.
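The mechanics Emil describes amount to nearest-neighbor matching: each detected face is reduced to a numeric signature, and a new face is compared against the signatures of friends who are already tagged, with a suggestion offered only when the match is close enough. This is not Facebook’s actual code—just a minimal sketch of the idea in Python, with all names, vectors, and the similarity threshold invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def suggest_tag(new_face, known_faces, threshold=0.9):
    """Return the best-matching friend's name, or None if no match clears the threshold."""
    best_name, best_sim = None, threshold
    for name, embedding in known_faces.items():
        sim = cosine(new_face, embedding)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

# Hypothetical face "signatures" for two already-tagged friends
known = {"Alice": [0.9, 0.1, 0.2], "Bob": [0.1, 0.8, 0.5]}
print(suggest_tag([0.88, 0.12, 0.21], known))  # → Alice (very close to Alice's vector)
print(suggest_tag([0.5, 0.5, 0.5], known))    # → None (no confident match)
```

The threshold is what separates “suggest a name” from “just draw the magic square”: a face that resembles no tagged friend closely enough yields no suggestion at all.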
Lifehacker explains how easy it is for Facebook users to opt out of having their friends see the automatically generated suggestion to tag their face (as did Facebook in its own announcement):
- Head to your Privacy Settings and click on Customize Settings.
- Scroll down to the “Suggest Photos of Me to Friends” setting and hit “Edit Settings”.
- In the drop-down on the right, hit “Disable”.
See the screenshots here. So, in short: The feature that’s upsetting the privacy regulationistas is a feature that saves us time and effort in tagging our friends in photos we upload—unless our friends have opted out of having their photos auto-suggested.
TechFreedom, CEI and ATR’s DigitalLiberty.net just put out the following statement about ECPA reform, something Ryan and I have blogged about here and here. Also check out the larger coalition letter we released in April with seven other leading free market groups and digitalfourthamendment.org.
* * *
WASHINGTON D.C. – Sen. Patrick Leahy (D-Vt.) today introduced legislation (S. 1011) to reform the Electronic Communications Privacy Act (ECPA). The law, enacted in 1986, was designed to protect individuals’ privacy by limiting governmental access to electronic data stored or sent using platforms or computers owned by third parties.
“Several lawmakers have proposed sweeping new regulation of how companies collect and use data to fund and improve the online content and services cherished by consumers,” said TechFreedom President Berin Szoka. “The costs to consumers of such regulations could be enormous, yet the harms supposedly justifying new regulations remain largely amorphous. Today, finally, we see a bill that focuses on the one clear harm that seems to underlie most online privacy concerns: law enforcement’s access to personal data without judicial scrutiny. Addressing that very real problem should unite everyone who cares about privacy.”
Sen. Leahy’s proposed legislation would amend ECPA to protect Americans’ private information stored remotely or in the “cloud” from unwarranted search and seizure, and limit unwarranted governmental access to mobile location information. The reforms would implement two of the four consensus principles advocated by the Digital Due Process coalition, a diverse coalition of public interest organizations, free market groups, high-tech companies, and scholars.
This morning, the U.S. Senate Judiciary Committee heard key administration officials testify about the statute that governs law enforcement access to private information held electronically by third parties. Several leading lawmakers are currently working to bring this law—the 1986 Electronic Communications Privacy Act (ECPA)—into the information age so that it reflects Americans’ reasonable privacy expectations in the era of webmail, mobile services, cloud computing and the like.
TechFreedom has led, in conjunction with the Competitive Enterprise Institute and Americans for Tax Reform‘s Digital Liberty Project, a coalition of leading free market public interest groups in a letter to the committee voicing their strong support for overhauling the quarter-century-old ECPA. The coalition—also including FreedomWorks, the Campaign for Liberty, the Washington Policy Center, Liberty Coalition, the Center for Financial Privacy and Human Rights, and Less Government—is urging Congress to extend traditional Fourth Amendment protections to Internet-based “cloud” and mobile location services while preserving the building blocks of law enforcement investigations.
The coalition letter explains that the framers of the Bill of Rights ratified the Fourth Amendment to protect individuals from unreasonable, unwarranted searches and seizures by government officials. But since courts have not consistently applied these Constitutional protections to private information stored with cloud and mobile providers, many Americans’ private information is vulnerable to warrantless access by law enforcement. To remedy this, the letter proposes four reforms to ECPA that would resolve legal ambiguities and affirm Constitutional protections by establishing electronic privacy standards that are consistent with the Fourth Amendment.
“Major decisions regarding the future architecture of cloud computing are being made right now,” explains the letter, calling for urgent action. “If Congress fails to enact ECPA reform, cloud computing services may be designed to rely on servers outside the U.S. Not only would this harm U.S. competitiveness, it could also, ironically, deny U.S. law enforcement access to cloud data—even with a lawful warrant.”
Read the full coalition letter here or below.
The FTC today announced it has reached a settlement with Google concerning privacy complaints about how the company launched its Buzz social networking service last year. The consent decree runs for a standard twenty-year term and provides that Google shall (i) follow certain privacy procedures in developing products involving user information, subject to regular auditing by an independent third party, and (ii) obtain opt-in consent before sharing certain personal information. Here’s my initial media comment on this:
For years, many privacy advocates have insisted that only stringent new regulations can protect consumer privacy online. But today’s settlement should remind us that the FTC already has sweeping powers to punish unfair or deceptive trade practices. The FTC can, and should, use its existing enforcement powers to build a common law of privacy focused on real problems, rather than phantom concerns. Such an evolving body of law is much more likely to keep up with technological change than legislation or prophylactic regulation would be, and is less likely to fall prey to regulatory capture by incumbents.
I’ve written in the past about how the FTC can develop such a common law. If the agency needs more resources to play this role effectively, that is what we should be talking about before we rush to the assumption that new regulation is necessary. Anyway, a few points about Part III of the consent decree, regarding the procedures the company has to follow:
- The company has to assess privacy risks raised by new products as well as existing products, much like data security assessments currently work. The company would have to assess, document and address privacy risks—and then subject those records to inspection by the independent auditor, who would determine whether the company has adequately studied and dealt with privacy risks.
- Google is agreeing to implement a version of Privacy by Design, in that the company will do even more to bake privacy features into its offerings.
- This is intended to avoid instances where the company makes a privacy blunder because it lacked adequate internal processes to thoroughly vet new offerings, or simply to prevent innocent mistakes—as with its inadvertent collection of content sent over unsecured Wi-Fi hotspots, which happened because the engineer designing its Wi-Fi mapping program mistakenly left that code in the system even though it wasn’t necessary for what Google was doing. I wrote more on that here.
As to Part II of the consent decree, which requires express affirmative consent for changes in the sharing of “identified information”: It’s well worth reading Commissioner Rosch’s concurring statement.
National Journal reports that the Department of Commerce (NTIA) will, at a Senate Commerce Committee hearing today, call for a “consumer privacy bill of rights”—a euphemism for sweeping privacy regulation:
“Having carefully reviewed all stakeholder comments to the Green Paper, the department has concluded that the U.S. consumer data privacy framework will benefit from legislation to establish a clearer set of rules for the road for businesses and consumers, while preserving the innovation and free flow of information that are hallmarks of the Internet,” [NTIA Administrator Larry] Strickling said in his prepared testimony obtained by Tech Daily Dose.
In other words: “We’ve taken the time to think this through very carefully and have reluctantly come to the conclusion that regulation is necessary.” Sorry, but I’m just not buying it—neither the wisdom of the recommendation nor the process that produced it. Let’s consider the timeline here:
- October 27, 2010 – NTIA Administrator Strickling announces Green Paper is coming but says nothing about timing and little about substance
- December 16, 2010 – NTIA/Commerce releases its Privacy Green Paper
- January 28, 2011 – deadline for public comments (28 non-holiday business days later)
- ??? – Commerce decides regulation is necessary
- March 16, 2011 – Commerce is ready to ask Congress for legislation (31 non-holiday business days later)
The Commerce Department gave the many, many interested parties the worst four weeks of the year—including Christmas, New Year’s and Martin Luther King Day—to digest and comment on an 88-page, ~31,000-word tome of a report on proposed regulation of how information flows in our… well, information economy. Oh, and did I mention that those same parties had already been given a deadline of January 31, 2011 to comment on the FTC’s 122-page, ~34,000-word privacy report released back on December 1 (too bad for those celebrating Hanukkah)? In fairness, the FTC did, on January 21, extend its deadline to February 18—but that hardly excuses the Commerce Department’s rush to judgment.
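For anyone who wants to check the business-day arithmetic in the timeline above, here is a quick sketch; the observed federal holidays in the comment window (Christmas observed Friday December 24, New Year’s observed Friday December 31, and MLK Day on Monday January 17) are hard-coded:

```python
from datetime import date, timedelta

def business_days(start, end, holidays):
    """Count weekdays strictly after `start` through `end`, skipping holidays."""
    day, count = start + timedelta(days=1), 0
    while day <= end:
        if day.weekday() < 5 and day not in holidays:  # Mon-Fri only
            count += 1
        day += timedelta(days=1)
    return count

# Observed federal holidays between the Green Paper's release and the deadline
holidays = {date(2010, 12, 24), date(2010, 12, 31), date(2011, 1, 17)}

# Green Paper released Dec. 16, 2010; comments due Jan. 28, 2011
print(business_days(date(2010, 12, 16), date(2011, 1, 28), holidays))  # → 28
```

The count comes out to exactly 28 non-holiday business days, confirming the figure in the timeline.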
Few people have experienced just how oppressive “privacy” regulation can be quite so directly as Peter Fleischer, Google’s Global Privacy Counsel. Early last year, Peter was convicted by an Italian court because Italian teenagers used Google Video to host a video they shot of themselves bullying an autistic kid—even though he didn’t know about the video until after Google took it down.
Of course, imposing criminal liability on corporate officers for failing to take down user-generated content is just a more extreme form of the more popular concept of holding online intermediaries liable for failing to take down content that is allegedly defamatory, bullying, invasive of a user’s privacy, etc. Both have the same consequence: Given the incredible difficulty of evaluating such complaints, sites that host UGC will tend simply to take it down upon receiving complaints—thus being forced to censor their own users.
Now Peter has turned his withering analysis on the muddle that is Europe’s popular “Right to be Forgotten.” Adam noted the inherent conflict between that supposed “right” and our core values of free speech. It’s exactly the kind of thing UCLA Law Prof. Eugene Volokh had in mind when he asked, “what is your ‘right to privacy’ but a right to stop me from observing you and speaking about you?” Peter hits the nail on the head:
More and more, privacy is being used to justify censorship. In a sense, privacy depends on keeping some things private, in other words, hidden, restricted, or deleted. And in a world where ever more content is coming online, and where ever more content is find-able and share-able, it’s also natural that the privacy counter-movement is gathering strength. Privacy is the new black in censorship fashions. It used to be that people would invoke libel or defamation to justify censorship about things that hurt their reputations. But invoking libel or defamation requires that the speech not be true. Privacy is far more elastic, because privacy claims can be made on speech that is true.