The AutoAdmit Case and the Future of Sec. 230

February 16, 2009

David Margolick has penned a lengthy piece for Portfolio.com about the AutoAdmit case, which has important ramifications for the future of Section 230 and online speech in general. Very brief background: AutoAdmit is a discussion board for students looking to enter, or just discuss, law schools. Some threads on the site have included ugly — insanely ugly — insults about some women.  A couple of those women sued to reveal the identities of their attackers and hold them liable for supposedly wronging them.  The case has been slowly moving through the courts ever since. Again, read Margolick’s article for all the details.  The important point here is that the women could not sue AutoAdmit directly for defamation or harassment because Section 230 of the Communications Decency Act of 1996 immunizes websites from liability for the actions of their users.  Consequently, those looking to sue must go after the actual individuals behind the comments which (supposedly) caused the harm in question.

I am a big defender of Section 230 and have argued that it has been the cornerstone of Internet freedom. Keeping online intermediaries free from burdensome policing requirements and liability threats has created the vibrant marketplace of expression and commerce that we enjoy today. If not for Sec. 230, we would likely live in a very different world.

Sec. 230 has come under attack, however, from those who believe online intermediaries should “do more” to address various concerns, including cyber-bullying, defamation, and other problems.  For those of us who believe passionately in the importance of Sec. 230, the better approach is to preserve immunity for intermediaries and instead encourage more voluntary policing and self-regulation by intermediaries; increased public pressure on those sites that turn a blind eye to such behavior, to encourage them to change their ways; more efforts to establish “community policing” by users, so that they can report or counter abusive language; and so on.

Of course, those efforts will never be foolproof, and a handful of bad apples will still be able to cause a lot of grief for some users on certain discussion boards, blogs, and so on.  In those extreme cases where legal action is necessary, it would be optimal if every effort were exhausted to go after the actual end user causing the problem before tossing Sec. 230 and current online immunity norms to the wind in an effort to force intermediaries to police speech.  After all, how do intermediaries know what is defamatory?  Why should they be forced to sit in judgment of such things?  If, under threat of lawsuit, they are petitioned by countless users to remove content or comments that those individuals find objectionable, the result will be a massive chilling effect on online free speech, since those intermediaries would likely play it safe most of the time and simply take everything down.

Which brings us back to the danger of a Sec. 230 backlash following the AutoAdmit case. As Margolick notes of the case:

By any standard, the plaintiffs’ catch has been meager. Even with one of the country’s top intellectual-property lawyers, backed by a super-elite law firm, going after them, most of the worst offenders got off scot-free. The fact that so few prey were netted could prompt calls to modify Section 230(c), if only to give victims of internet abuse more of a chance. Brian Leiter, the professor and vocal critic of AutoAdmit, sees it coming. He calls the free pass enjoyed by Google and other carriers “a disaster” and says change is inevitable. “The point at which some senator’s daughter becomes the target of this kind of campaign of online vilification and harassment on the next iteration of AutoAdmit — something’s going to happen,” predicted Leiter, who now teaches at the University of Chicago Law School.

Although I obviously don’t agree with Prof. Leiter that Sec. 230 has been “a disaster,” I think he’s right that one particularly visible and sensitive case could end up bringing 230 back up for political reconsideration.  Commenting on the Info/Law blog, William McGeveran of the University of Minnesota Law School summarizes my own feelings about the potential danger we face going forward:

the response will be a dramatic evisceration or even elimination of Section 230 immunity. We might end up with some kind of notice-and-takedown regime that could be abused just as it is in the DMCA setting, allowing anyone to effectively force the elimination of web content they dislike with the mere untested allegation that it was tortious. Worse, we might see an effort to repeal section 230 altogether, making it impossible to run an open online forum for user-generated content without risking significant liability.

Indeed, there have already been calls for variants of a notice-and-takedown regime for speech put forward by law professors such as Mark Lemley and Daniel Solove. And the rising calls at the state level for legislation to address cyber-bullying could become another pressure point in the movement to deputize the middleman.

Importantly, however, the “[AutoAdmit] case has already made a difference,” Margolick notes:

Things have calmed down on AutoAdmit, where, Cohen says, he’s driven away the worst actors and enlisted volunteer moderators. Some post­ers, moreover, have announced their “retirement”; any further self-expression, they’ve concluded, is clearly not worth the risk. Thanks to the case, casual defamers — those who take potshots for sport — may now refrain out of empathy for the plaintiffs, while the more malicious may have been intimidated into silence. The case may also have helped Heller and Iravani [the plaintiffs in the case] cleanse their Google pages, as the old slurs have fallen farther down the screen. And last spring, Cohen quietly removed the offending threads. He’d have done so sooner, he says, had he been asked more nicely.

This gets back to my point about how self-regulation, social norms, and public pressure can be an effective way to counter online harassment without resorting to major changes in law or liability norms. Again, I think Sec. 230 is worth preserving, and efforts to tinker with it are likely to open a Pandora’s box of problems for intermediaries and average users alike.  The question going forward is this: do we let the presence of a few bad apples online justify the complete upending of a legal standard that has made the Internet the most vibrant platform for free speech the world has ever known?  I certainly hope not.

____________

Some additional reading on the case:

  • MikeRT

The standard of evidence for an automatic takedown should be fairly high, and should include a provision that makes it easier to become guilty of perjury or harassment than the DMCA allows. The law should be written such that if retained counsel would have told you the content was even “probably not tortious,” that should be automatic grounds for the other party, if found innocent, to raid your bank account and property until you're in the poorhouse as vengeance for taking them to court and violating their rights.

  • Ryan Radia

    I think the current definitions of slander and libel are quite reasonable. Whether Adam is a scumbag or not is a matter of opinion, as is the question of whether he's a responsible adult. I should be able to express harsh value judgments about others without fear of lawsuit so long as I do not misrepresent positive statements about them. On the other hand, were I to assert that Adam has beaten children in the past and that statement turned out to be false, then I should be on the hook for libel.

  • Ryan Radia

    If I understand Eugene Volokh's post about this subject correctly, if an online intermediary is hosting content that defames me and I subsequently sue the intermediary, not only is the intermediary immune from liability under Sec. 230, but it's also next to impossible for me to obtain an injunction ordering the intermediary to remove the defamatory content. All I can get is a court order forcing the intermediary to disclose any server logs that might shed light on the identity of the individual responsible for the defamatory posting. That, however, can be a real crapshoot.

JuicyCampus (the college gossip forum that was all over the news about a year ago) maintains a privacy policy that guarantees users that the site cannot associate IP addresses with particular posts. Under current law, victims of defamation on sites like JuicyCampus who go to court are out of luck: they can't ascertain the identity of the guilty party, nor can they get the defamatory content taken down.

Implementing a DMCA-style notice-and-takedown system for allegedly defamatory postings seems like a bad idea, as the chilling effects of such a regime would probably outweigh any benefits. Do you think there are any other, more reasonable alternatives that would afford victims of anonymous defamation at least some recourse? Perhaps it should be easier for victims to get injunctions against intermediaries in cases where there is prima facie evidence that defamation has in fact occurred. Or maybe some sort of data retention mandate should be attached to the requirements for intermediaries that wish to retain their immunity from defamation suits. Both of these ideas seem somewhat un-libertarian, and maybe there's just no way of dealing with online defamation without making anonymous speech a whole lot harder to exercise. I hope that's not the case, though.


  • Pingback: Digital Natives » Building Communities: Tumblr and Freedom of Expression

  • Pingback: The Future of Sec. 230 and Online Immunity: My Debate with Harvard’s John Palfrey | The Technology Liberation Front

  • Pingback: Craig’s List Sued for Prostitution | The Technology Liberation Front

  • Pingback: Anonymity, Reader Comments & Section 230 | The Technology Liberation Front

  • Pingback: Emerging Threats to Section 230 — Technology Liberation Front

  • Pingback: GetUnvarnished.com: Should We Allow User Feedback about Personal Reputation?

  • Pingback: TLFers Attending Two Important Sec. 230 / Net Liability Events in CA This Week

  • Pingback: Undiplomatic Immunity - Jotwell: Cyberlaw

  • Pingback: Why SOPA Threatens the DMCA Safe Harbor
