First Amendment & Free Speech

The Stop Online Piracy Act (SOPA), a controversial bill before the House of Representatives aimed at combating “rogue websites,” isn’t just about criminal, foreign-based sites that break U.S. intellectual property laws with impunity. Few dispute that these criminal websites that profit from large-scale counterfeiting and copyright infringement are a public policy problem. SOPA’s provisions, however, extend beyond these criminal sites, and would potentially subject otherwise law-abiding Internet intermediaries to serious legal risks.

Before moving forward with rogue websites legislation, it’s crucial that lawmakers take a deep breath and appreciate the challenges at stake in legislating online intermediary liability, lest we endanger the Nozickian “utopia of utopias” that is today’s Internet. The unintended consequences of overbroad, carelessly drafted legislation in this space could be severe, particularly given the Internet’s incredible importance to the global economy, as my colleagues have explained on these pages.

To understand why SOPA could be a game-changer for online service providers, it’s important to understand the simmering disagreement surrounding the Digital Millennium Copyright Act (DMCA) of 1998, which grants certain online service providers a safe harbor from liability for their users’ copyright infringing actions. In exchange for these protections, service providers must comply with the DMCA’s notice-and-takedown system, adopt a policy to terminate users who repeatedly infringe, and meet several other conditions. Service providers are only eligible for this safe harbor if they act to expeditiously remove infringing materials upon learning of them. Also ineligible for the safe harbor are online service providers who turn a blind eye to “red flags” of obvious infringement.

The DMCA does not, however, require providers to monitor their platforms for infringing content or design their services to facilitate monitoring. Courts have held that a DMCA-compliant service provider does not lose its safe harbor protection if it fails to act upon generalized knowledge that its service is used for many infringing activities, in addition to lawful ones, so long as the service provider does not induce or encourage users’ infringing activities.

Defenders of the DMCA safe harbor argue that it’s helped enable America’s Internet-based economy to flourish, allowing an array of web businesses built around lawful user-generated content — including YouTube, Facebook, and Twitter — to thrive without fear of copyright liability or burdensome monitoring mandates.

Conversely, some commentators, including UCLA’s Doug Lichtman, argue that the DMCA inefficiently tips the scales in favor of service providers, to the detriment of content creators — and, ultimately, consumer welfare. Pointing to a series of court rulings interpreting the safe harbor’s provisions, critics argue that the DMCA gives online intermediaries little incentive to do anything beyond the bare minimum to stop copyright infringement. Critics further allege that the safe harbor has been construed so broadly that it shields service providers that are deliberately indifferent to their users’ infringing activities, however rampant they may be.

What does SOPA have to do with all of this? Buried in the bill’s 78 pages are several provisions that run a very real risk of effectively sidestepping many of the protections conferred on online service providers by the DMCA safe harbor.

Continue reading →

This week I will again be attending the Family Online Safety Institute’s excellent annual summit. The 2-day affair brings together some of the world’s leading experts on online safety and privacy issues. It’s a great chance to learn about major developments in the field. As I was preparing for the session I am moderating on Thursday, I thought back to the first FOSI annual conference, which took place back in 2007. What is remarkable about that period is that there was a flurry of legislative and regulatory activity related to online child safety that we simply do not see today.

In fact, just 3 1/2 years ago, John Morris of the Center for Democracy and Technology and I compiled a legislative index [summary here] that cataloged the more than 30 legislative proposals that had been introduced in the 110th session of Congress. There was also a great deal of interest in these issues within the regulatory community. Finally, countless state and local measures related to online safety and speech issues had been floated. Today, by contrast, it is hard for me to find any legislative measures focused on online safety regulation at the federal level, and I don’t see much activity at the agency level either. I haven’t surveyed state and local activity, but it seems like it has also died down.

Generally speaking, I think this is a good development since I am opposed to most proposals to regulate online speech, expression, or conduct. But let’s ignore the particular wisdom of such measures and ask a simple question: What explains the decline in Internet safety legislation and online content regulation? I believe there are three possible explanations: Continue reading →

I highly recommend this important new study on “Why Parents Help Their Children Lie to Facebook about Age: Unintended Consequences of the Children’s Online Privacy Protection Act” by danah boyd of New York University, Eszter Hargittai from Northwestern University, Jason Schultz from University of California, Berkeley, and John Palfrey from Harvard University. COPPA is a complicated and somewhat open-ended law and regulatory regime. COPPA requires that commercial operators of websites and services obtain “verifiable parental consent” before collecting, disclosing, or using “personal information” (name, contact information) of children under the age of 13 if either their website or service (or “portion thereof”) is “directed at children” or they have actual knowledge that they are collecting personal information from a child.

The new study, which surveyed over 1,000 parents of children between the ages of 10 and 14, reveals that, despite the best of intentions, COPPA is having many unintended costs and consequences:

Although many sites restrict access to children, our data show that many parents knowingly allow their children to lie about their age — in fact, often help them to do so — in order to gain access to age-restricted sites in violation of those sites’ ToS. This is especially true for general-audience social media sites and communication services such as Facebook, Gmail, and Skype, which allow children to connect with peers, classmates, and family members for educational, social, or familial reasons.

The authors conclude that “COPPA inadvertently undermines parents’ ability to make choices and protect their children’s data” and that their results “have significant implications for policy-makers, particularly in light of ongoing discussions surrounding COPPA and other age-based privacy laws.” Indeed, this paper could really shake up the debate over online kids’ privacy regulation. I will have more analysis of the paper in my weekly Forbes column this weekend.

Additional reading for COPPA background and current controversies: Berin Szoka & Adam Thierer, “COPPA 2.0: The New Battle over Privacy, Age Verification, Online Safety & Free Speech,” (May 21, 2009); and Adam Thierer, “Kids, Privacy, Free Speech & the Internet: Finding the Right Balance,” (August 12, 2011).

This afternoon the Stop Online Piracy Act (H.R. 3261) was introduced by Rep. Lamar Smith of the House Judiciary Committee. This bill is a companion to the PROTECT IP Act and S.978, both of which were reported by the Senate Judiciary Committee in May.

There’s some to like about the bill, but I’m uneasy about quite a few of its provisions. While I’ll have plenty to say about this bill in the future, for now, here are a few preliminary thoughts:

  • The bill’s definition of “foreign infringing sites” at p. 10 borrows heavily from 18 U.S.C. § 2323, covering any site that commits or facilitates the commission of criminal copyright infringement and would be subject to civil forfeiture if it were U.S.-based. Unfortunately, the outer bounds of 18 U.S.C. § 2323 are quite unclear. The statute, which was enacted only a few years ago, encompasses “any property used, or intended to be used, in any manner or part to commit or facilitate” criminal copyright infringement. While I’m all for shutting down websites operated by criminal enterprises, not all websites used to facilitate crimes are guilty of wrongdoing. Imagine a user commits criminal copyright infringement using a foreign video sharing site similar to YouTube, but the site is unaware of the infringement. Since the site is “facilitating” criminal copyright infringement, albeit unknowingly, is it subject to the Stop Online Piracy Act?
  • Section 103 of the bill, which creates a DMCA-like notification/counter-notification regime, appears to lack any provision encouraging ad networks and payment processors to restore service to a site allegedly “dedicated to theft of U.S. property” upon receipt of a valid counter-notification and when no civil action has been brought. By contrast, the DMCA contains a safe harbor that shields from liability service providers who take reasonable steps to take down content, but that safe harbor only applies if service providers promptly restore allegedly infringing content upon receipt of a counter-notification and when the rights holder does not initiate a civil action. Why doesn’t H.R. 3261 include a similar provision?
  • The bill’s private right of action closely resembles that found in the PROTECT IP Act. Affording rights holders a legal avenue to take action against rogue websites makes sense, but I’m uneasy about creating a private right of action that allows courts to issue such broad preliminary injunctions against allegedly infringing sites. I’m also concerned about the lack of a “loser pays” provision.
  • Section 104 of the bill, which provides immunity for entities that take voluntary actions against infringing sites, now excludes from its safe harbor actions that are not “consistent with the entity’s terms of service or other contractual rights.” This is a welcome change and alleviates concerns I expressed about the PROTECT IP Act essentially rendering certain private contracts unenforceable.
  • Section 201 of the bill makes certain public performances via electronic means a felony. The section contains a rule of construction at p. 60 that clarifies that intentional copying is not “willful” if it’s based on a good faith belief with a reasonable basis in law that the copying is lawful. Could this provision cause courts to revisit the willfulness standard discussed in United States v. Moran, in which a federal court found that a defendant charged with criminal copyright infringement was not guilty because he (incorrectly) thought his conduct was permitted by the Copyright Act?

Twenty years ago, one of the best books ever penned about freedom of speech was released. Sadly, many people still haven’t heard of it. That book was Freedom, Technology and the First Amendment, by Jonathan Emord. With the exception of Ithiel de Sola Pool’s 1983 masterpiece Technologies of Freedom: On Free Speech in an Electronic Age, no book has had a more profound impact on my thinking about free speech and technology policy than Emord’s 1991 classic. Emord’s book is, at once, a magisterial history and a polemical paean. This is no wishy-washy apologia for free speech; rather, it is a celebration of the amazing gift of freedom that the Founding Fathers gave us with the very first amendment to our Constitution.

Unlike most people, Emord assumes nothing about the nature and purpose of the First Amendment; instead, he starts in pre-colonial times and explains how our rich heritage of freedom of speech and expression came about. Like Pool, Emord also makes the case for equality of all press providers and debunks the twisted logic behind much of this century’s corrupt jurisprudence governing speech transmitted via electronic media. Pool and Emord make it clear that if the First Amendment is to retain its true meaning and purpose as a bulwark against government control of speech and expression, electronic media providers (TV, radio, cable, the Internet) must be accorded full First Amendment freedoms on par with traditional print media (newspapers, magazines, books and journals). Continue reading →

A year ago, I filed a joint amicus brief with the Electronic Frontier Foundation urging the Supreme Court to overturn California’s paternalistic law, which rested on the dangerous premise that videogame depictions of violence constitute “obscenity” unprotected by the First Amendment. Fortunately, we won. Thus, the First Amendment protects all media, while parents have a variety of tools available to them to limit what content their kids can consume or which games they can play.

But in case you’re wondering what the world might look like had the decision gone the other way, check out the contrast between the US version of Maroon 5’s hit song “Misery” and the UK version. First, here’s the (raucous and sexy) US version:

Now, here’s the UK version, where the sexually suggestive parts remain (kids love that stuff) but all the “violent” parts have been replaced with, or covered by, ridiculous cartoon images. Really, it’s just too funny. The best part is where the knife she uses to stab the gaps between his fingers on the table has been replaced with a cartoon ice cream cone. Don’t try that at home, kids—you’ll make a chocolatey mess! Continue reading →

[NOTE: The following is a template for how to script congressional testimony when invited to speak about online safety issues.]

Mr. Chairman and members of the Committee, thank you for inviting me here today to testify about the most important issue to me and everyone in this room: Our children.

There is nothing I care more about than the future of our children. Like Whitney Houston, “I believe the children are our future.”

Mr. Chairman, I remember with fondness the day my little Johnny and Jannie came into this world. They were my little miracles. Gifts from God, I say. At the moment of birth, my wife… oh, well, I could tell you all about it someday but suffice it to say it was a beautiful scene, with the exception of all the amniotic fluid and blood everywhere. I wept for days.

Today my kids are (mention ages of each) and they are the cutest little angels on God’s green Earth. (NOTE: At this point it would be useful for you to hold up a picture of your kids, preferably with them cuddling with cute stuffed animals, a kitten, or petting a pony as in the example below. Alternatively, use a picture taken at a major attraction located in the Chairman’s congressional district.) Continue reading →

On Wednesday afternoon, it was my great pleasure to make some introductory remarks at a Family Online Safety Institute (FOSI) event that was held at the Yahoo! campus in Sunnyvale, CA. FOSI CEO Stephen Balkam asked me to offer some thoughts on a topic I’ve spent a great deal of time thinking about in recent years: Who needs parental controls? More specifically, what role do parental control tools and methods play in the upbringing of our children? How should we define or classify parental control tools and methods? Which are most important / effective? Finally, what should the role of public policy be toward parental control technologies on both the online safety and privacy fronts?

In past years, I spent much time writing and updating a booklet on these issues called Parental Controls & Online Child Protection: A Survey of Tools & Methods. It was an enormous undertaking, however, and I abandoned updating it after I hit version 4.0. But that doesn’t mean I’m not still putting a lot of thought into these issues. My focus has shifted over the past year more toward the privacy-related concerns and away from the online safety issues. Of course, all these issues intersect, and many people now (rightly) consider them to be largely the same debate.

Anyway, to kick off the FOSI event, I offered three provocations about parental control technologies and the state of the current debate over them. I buttressed some of my assertions with findings from a recent FOSI survey of parental attitudes about parental controls and online safety. Continue reading →

Yesterday, the Federal Trade Commission (FTC) released its long-awaited proposed revisions to the Children’s Online Privacy Protection rule (the “COPPA Rule”). Below I offer a few brief thoughts on the draft document. My remarks assume a basic level of knowledge about COPPA so that I don’t have to spend pages explaining the intricacies of this complex law and regulatory regime. If you need background on the COPPA law and rule, please check out this paper by Berin Szoka and me: “COPPA 2.0: The New Battle over Privacy, Age Verification, Online Safety & Free Speech.”

Dodging the COPA / Mandatory Age Verification Bullet

The most important takeaway from yesterday’s proposal involves something the FTC chose not to do: The agency very wisely decided to ignore some requests to extend the coverage of COPPA’s regulatory provisions from children under 13 all the way up to teens under 18. An effort to expand COPPA’s “verifiable parental consent” requirements to all teens would have raised thorny First Amendment issues as well as a host of practical enforcement concerns. In essence, it would have required Internet-wide age verification of children and adults in order to ensure that everyone was exactly who they claimed to be online. We already had an epic decade-long legal battle over that issue when the constitutionality of the Child Online Protection Act (COPA), another 1998 law sometimes confused with COPPA, was tested many times over and always found to be in violation of the First Amendment.

Regardless, the FTC didn’t go there yesterday, so this concern is off the table for now. The agency deserves credit for avoiding this constitutional thicket. Continue reading →

Republished from The Mark News

Privacy advocates are attacking Google again, this time for requiring that field-testers of its new, invite-only Google+ social network use “the names they commonly go by in the real world.” After initially suspending Google+ accounts flagged as pseudonymous, Google has clarified that such users will be given four days to add their real names to their profiles. Users who don’t like the policy can export all data they’ve put into Google+ and leave.

Cyber-sociologist Danah Boyd calls “real name” policies “an authoritarian assertion of power … [by] privileged white Americans … over vulnerable people [like] abuse survivors, activists, LGBT people, women, and young people.” In 2003, she denounced the “Fakester genocide” perpetrated by Friendster, the first major “real name” social network. Facebook later faced similar criticism from her and others for its purge of “Fakebookers” – those using fake names on the popular social network.

Boyd and others are right that anonymity can be “a shield from the tyranny of the majority,” as the U.S. Supreme Court has said while striking down laws requiring speakers to identify themselves. But, like the rest of the First Amendment, the right to anonymous speech limits government, not private actors. In other words, while the First Amendment bars government from forcing us to identify ourselves, those who sign up for Google+ must play by Google’s rules.

Boyd wants to regulate social-media giants as public utilities, but – unlike government bans – we can opt out of these services. Google and Facebook merely offer trusted communities that compete with sites like Twitter, where pseudonyms thrive alongside real names. With over 200 million users, Twitter has met the very demand Boyd cites – but she’s not satisfied.

As a gay activist myself, I’m sympathetic to her privacy concerns. But, as much as I respect Boyd, I find her obsession with “privilege” unhelpful. The engineers who design new social-networking tools may indeed tend to under-value the concerns of particularly privacy-sensitive users or groups. But their critics under-value authenticity’s benefits even more – or simply refuse to acknowledge that privacy is in tension with civility and usability, among other values. Continue reading →