Doug Feaver, a former Washington Post reporter and editor, has published a very interesting editorial today entitled “Listening to the Dot-Commenters.” In the piece, Feaver discusses his personal change of heart about “the anonymous, unmoderated, often appallingly inaccurate, sometimes profane, frequently off point and occasionally racist reader comments that washingtonpost.com allows to be published at the end of articles and blogs.” When he worked at the Post, he fought to keep anonymous and unmoderated comments off the WP.com site entirely, both because it was too difficult to pre-screen them all and because, as he writes, “the bigger problem with The Post’s comment policy, many in the newsroom have told me, is that the comments are anonymous. Anonymity is what gives cover to racists, sexists and others to say inappropriate things without having to say who they are.”
But Feaver now believes those anonymous, unmoderated comments have value because:
I believe that it is useful to be reminded bluntly that the dark forces are out there and that it is too easy to forget that truth by imposing rules that obscure it. As Oscar Wilde wrote in a different context, “Man is least in himself when he talks in his own person. Give him a mask, and he will tell you the truth.” Too many of us like to think that we have made great progress in human relations and that little remains to be done. Unmoderated comments provide an antidote to such ridiculous conclusions. It’s not like the rest of us don’t know those words and hear them occasionally, depending on where we choose to tread, but most of us don’t want to have to confront them.
It seems a bit depressing that the best argument in favor of allowing unmoderated, anonymous comments is that it allows us to see the dark underbelly of mankind, but the good news, Feaver points out, is that:
But I am heartened by the fact that such comments do not go unchallenged by readers. In fact, comment strings are often self-correcting and provide informative exchanges. If somebody says something ridiculous, somebody else will challenge it. And there is wit.
He goes on to provide some good examples. And he also notes how unmoderated comments let readers provide their heartfelt views on the substance of sensitive issues and let journalists and editorialists know how they feel about what is being reported or how it is being reported. “We journalists need to pay attention to what our readers say, even if we don’t like it,” he argues. “There are things to learn.”
I applaud Mr. Feaver for this. This is a struggle not just for journalists at major media outlets but also for bloggers like us here at the TLF. There are times when very annoying, even hurtful things are said by anonymous commenters here. Our policy, however, has generally been to allow a vibrant exchange of views, except in the rare circumstances where a commenter utters racial epithets or starts issuing death threats. Or, if a specific commenter goes into “stalker mode” and does nothing but post harassing, irrelevant comments all day, then those will occasionally be discarded. But, generally speaking, it’s “anything goes” here. (We even allow spam!) Each author is free to decide for themselves where to draw the line, but we all generally err on the side of completely unmoderated exchange for the reasons Feaver lists. We know it is far more likely that we’ll get hostile anonymous comments rather than nice ones, but it’s good to get feedback of all varieties, even when it’s nasty.
From a policy perspective, however, this issue is taking on greater weight because some folks believe that unmoderated, anonymous user comments result in harassment, hate speech, defamation, or privacy violations. As a result, there has been a growing chorus of critics who claim something must be done to remedy this problem. Cass Sunstein and Richard Thaler, for example, have advocated a Civility Check that “can accurately tell whether the email you’re about to send is angry and caution you, ‘warning: this appears to be an uncivil email. do you really and truly want to send it?’” The state of Kentucky has considered legislation that would ban online anonymity, even though that would be clearly unconstitutional. Respected law school professors such as Mark Lemley and Daniel Solove have toyed with the idea of a DMCA-like “notice-and-takedown” regime for potentially defamatory comments online. I once even heard Cal-Western law school professor Nancy S. Kim suggest that blogs, social networking sites, and other interactive sites should institute a “cooling off period” to address cyber-harassment. By requiring all those seeking to comment to wait a certain length of time before a message or image is posted to a website, she hoped that some commenters might choose to tone down or even remove the potentially offending messages or images.
Of course, it is more likely that readers would just choose not to comment at all if any of these proposals were enshrined into law. And that gets to the heart of what’s wrong with any potential legal response to this “problem” of online anonymous, unmoderated speech and user comments: It will massively chill free speech and expression. Sure, that would get rid of the hecklers and the jackasses who cause grief for some, but it would also deprive us of the many constructive user comments and criticisms that make the online experience — for better or worse — the most open, vibrant exchange of views ever known to man.
Finally, it goes without saying that this debate is fundamentally tied up with the future of Section 230 and the question of intermediary liability. Currently, online service providers of all flavors are generally not required to police or screen user comments or force users to be authenticated and reveal their identities before posting comments. Section 230 has been the key to protecting intermediaries from punishing liability that would otherwise force them to severely curtail online expression, or run the risk of being driven under by the weight of endless lawsuits. This is why I have argued that Sec. 230 is “the cornerstone of ‘Internet freedom’ in its truest and best sense of the term.” But, again, all this could change if we are not vigilant in defending Sec. 230.
OK, now that I’ve made this impassioned defense of unmoderated and completely anonymous online exchange, let the hateful comments fly!
Anonymity, Reader Comments & Section 230
by Adam Thierer on April 9, 2009 · 18 comments