Ladar Levison, founder of the encrypted email service Lavabit, discusses the recent government action that led him to shut down his firm. When it was suspected that NSA whistleblower Edward Snowden used Lavabit’s email service, the FBI obtained a court order requiring Levison to hand over the site’s SSL private keys, jeopardizing the privacy of Lavabit’s 410,000 users. Levison discusses his inspiration for founding Lavabit and why he chose to suspend the service; how Lavabit differed from email services like Gmail; developments in his case and how the Fourth Amendment has come into play; and his involvement with the recently formed Dark Mail Technical Alliance.
I’m pleased to announce the release of my latest law review article, “A Framework for Benefit-Cost Analysis in Digital Privacy Debates.” It appears in the new edition of the George Mason University Law Review. (Vol. 20, No. 4, Summer 2013)
This is the second of two complementary law review articles I am releasing this year dealing with privacy policy. The first, “The Pursuit of Privacy in a World Where Information Control is Failing,” was published in Vol. 36 of the Harvard Journal of Law & Public Policy this spring. (FYI: Both articles focus on privacy claims made against private actors — namely, efforts to limit private data collection — and not on privacy rights against governments.)
My new article on benefit-cost analysis in privacy debates makes a seemingly contradictory argument: benefit-cost analysis (“BCA”) is extremely challenging in online child safety and digital privacy debates, yet it remains essential that analysts and policymakers attempt to conduct such reviews. While we will never be able to perfectly determine either the benefits or costs of online safety or privacy controls, the very act of conducting a regulatory impact analysis (“RIA”) will help us to better understand the trade-offs associated with various regulatory proposals.
I’m excited to announce the release of my latest law review article, “The Pursuit of Privacy in a World Where Information Control is Failing,” which appears in the next edition (vol. 36) of the Harvard Journal of Law & Public Policy. This is the first of two complementary law review articles that I will be releasing this year dealing with privacy policy. The second, which will be published later this summer by the George Mason University Law Review, is entitled, “A Framework for Benefit-Cost Analysis in Digital Privacy Debates.” (FYI: Both articles focus on privacy claims made against private actors — namely, efforts to limit private data collection — and not on privacy rights against governments.)
The new Harvard Journal article is divided into three major sections. Part I focuses on some of the normative challenges we face when discussing privacy and argues that there may never be a widely accepted, coherent legal standard for privacy rights or harms here in the United States. It also explores the tensions between expanded privacy regulation and online free speech. Part II turns to the many enforcement challenges that are often ignored when privacy policies are being proposed or formulated and argues that legislative and regulatory efforts aimed at protecting privacy must now be seen as an increasingly intractable information control problem. Most of the problems policymakers and average individuals face when it comes to controlling the flow of private information online are similar to the challenges they face when trying to control the free flow of digitized bits in other information policy contexts, such as online safety, cybersecurity, and digital copyright.
If the effectiveness of law and regulation is limited by the normative considerations discussed in Part I and the practical enforcement complications discussed in Part II, what alternatives remain to assist privacy-sensitive individuals? I address that question in Part III of the paper and argue that the approach America has adopted to deal with concerns about objectionable online speech and child safety offers a path forward on the privacy front as well.
News about the Epsilon breach has spread relatively slowly. The breach of data held by an email service provider is bad—no question—but it’s not terribly consequential. Email addresses aren’t generally kept private.
But the Epsilon story may soon heat up. The presence of an email address on a list creates inferences about aspects of a person’s life that may be sensitive. So it is with GlaxoSmithKline’s lists related to prescriptions. As the Coalition Against Unsolicited Commercial Email points out, correlation between email addresses and interest in particular drugs makes spear-phishing attacks more potent. Fraudulent email that is tailored to a medication a person takes will have a higher uptake than average, and could be used to defraud people on matters relating to their health.
But is it helpful to exaggerate this serious threat? CAUCE titles its post: “Criminals Now Know What Prescriptions You Take.” Thought leaders like Jules Polonetsky have picked up that meme and run with it.
For people who are not data-literate, a likely implication of “criminals know what prescriptions you take” is that criminals have access to lists of the prescriptions they take. A person on ten different medications might think that criminals know each and every prescription he or she takes. That’s more frightening than knowing that an association between one or two prescriptions and an email address is available to criminals. (It’s possible that people have signed up for email relating to each of their prescriptions, all of which are from drug companies who use Epsilon as their email service provider, but I think it is unlikely and rare enough to treat as an irrelevant outlier.)
What criminals know is that people are on lists related to prescriptions. Many do take that prescription. Some used to take that prescription. Some have a loved one who takes it, some sell it, some prescribe it, and so on.
What’s the point of this observation? Not much. But under the rule of media and politics—“if it bleeds, it leads”—we may soon see a media and policy stampede. That stampede will treat an important security issue that deserves careful attention as a techno-cyber-apocalypse that demands immediate overreaction.
Most of you have probably already seen this but Pingdom recently aggregated and posted some amazing stats about “Internet 2009 In Numbers.” Worth checking them all out, but here are some highlights:
- 1.73 billion Internet users worldwide as of Sept 2009; 18% increase in Internet users since previous year.
- 81.8 million .COM domain names at the end of 2009; 12.3 million .NET & 7.8 million .ORG
- 234 million websites as of Dec 2009; 47 million were added in 2009.
- 90 trillion emails sent on the Internet in 2009; 1.4 billion email users worldwide.
- 26 million blogs on the Internet.
- 27.3 million tweets on Twitter per day as of Nov 2009.
- 350 million people on Facebook; 50% of them log in every day; plus 500,000 active Facebook applications.
- 4 billion photos hosted by Flickr as of Oct 2009; 2.5 billion photos uploaded each month to Facebook.
- 1 billion videos served by YouTube each day; 12.2 billion videos viewed per month; 924 million videos viewed per month on Hulu in the US as of Nov 2009; plus the average Internet user in the US watches 182 online videos each month.
And yet some people claim that digital generativity and online innovation are dead! Things have never been better.
What Unites Advocates of Speech Controls & Privacy Regulation? [pdf]
by Adam Thierer & Berin Szoka
The Progress & Freedom Foundation, Progress on Point No. 16.19
Anyone who has spent time following debates about speech and privacy regulation comes to recognize the striking parallels between these two policy arenas. In this paper we will highlight the common rhetoric, proposals, and tactics that unite these regulatory movements. Moreover, we will argue that, at root, what often animates calls for regulation of both speech and privacy are two remarkably elitist beliefs:
- People are too ignorant (or simply too busy) to be trusted to make wise decisions for themselves (or their children); and/or,
- All or most people share essentially the same values or concerns and, therefore, “community standards” should trump household (or individual) standards.
While our use of the term “elitism” may unduly offend some understandably sensitive to populist demagoguery, our aim here is not to launch a broadside against elitism as Time magazine culture critic William A. Henry III once defined it: “The willingness to assert unyieldingly that one idea, contribution or attainment is better than another.”[1] Rather, our aim here is to critique that elitism which rises to the level of political condescension and legal sanction. We attack not so much the beliefs of some leaders, activists, or intellectuals that they have a better idea of what is in the public’s best interest than the public itself does, but rather the imposition of those beliefs through coercive, top-down mandates.
That sort of elitism—elitism enforced by law—is often the objective of speech and privacy regulatory advocates. Our goal is to identify the common themes that unite these regulatory movements, explain why such political elitism is unwarranted, and make it clear how it threatens individual liberty as well as the future of a free and open Internet. As an alternative to this elitist vision, we advocate an empowerment agenda: fostering an environment in which users have the tools and information they need to make decisions for themselves and their families.
In episode #44 of “Tech Policy Weekly,” Berin Szoka and Adam Thierer engage in a debate with Internet security expert Chris Soghoian, who is a student fellow at the Berkman Center for Internet & Society at Harvard University. He is also a Ph.D. candidate at Indiana University’s School of Informatics.
Chris is an up-and-coming star in the field of cyberlaw and technology policy, as he has quickly made a name for himself in debates over privacy policy, data security, and government surveillance. He straddles the line between academic and activist, and the role he often plays in many tech policy debates is somewhat akin to what Ralph Nader has done in many other fields through the years. Except, in this case, instead of “Unsafe at Any Speed” it’s more like “Unsafe at Any Setting,” since Chris is often raising a stink about what he regards as unjust or unreasonable privacy or security settings that various websites or service providers use.
On the show, Chris talks about two of his recent crusades to get certain online providers to change their default settings to improve user security or privacy: (1) His effort this week to get major email providers—and Google in particular—to change their default security settings on their email offerings; and (2) his earlier crusade to create permanent opt-out cookies to stop behavioral advertising by advertising networks.
There are several ways to listen to today’s TLF Podcast. You can press play on the player below to listen right now, or download the MP3 file. You can also subscribe to the podcast by clicking on the button for your preferred service. (And do us a favor, Digg this podcast!)
This is just a quick reminder to both faithful and fair-weather readers that there are many ways to keep up with what we’re saying here at the Technology Liberation Front, including:
(1) RSS
(2) Twitter
(3) Facebook
(4) Daily email alert
(5) Podcast
…
Or just make the TLF your browser’s welcome page! What better way to start each day?
Finally, as always, we appreciate your support, attention, and tolerance of our rants.