The Internet’s greatest blessing — its general openness to all speech and speakers — is also sometimes its biggest curse. That is, you cannot expect to have the most widely accessible, unrestricted communications platform the world has ever known and not also have some imbeciles who use it to spew insulting, vile, and hateful comments.
It is important to put things in perspective, however. Hate speech is not the norm online. The louts who spew hatred represent a small minority of all online speakers. The vast majority of online speech is of a socially acceptable — even beneficial — nature.
Still, the problem of hate speech remains very real, and a diverse array of strategies is needed to deal with it. The sensible path forward in this regard is charted by Abraham H. Foxman and Christopher Wolf in their new book, Viral Hate: Containing Its Spread on the Internet. Their book explains why the best approach to online hate is a combination of education, digital literacy, user empowerment, industry best practices and self-regulation, increased watchdog / press oversight, social pressure and, most importantly, counter-speech. Foxman and Wolf also explain why — no matter how well-intentioned — legal solutions aimed at eradicating online hate will not work and would raise serious unintended consequences if imposed.
In striking this sensible balance, Foxman and Wolf have penned the definitive book on how to constructively combat viral hate in an age of ubiquitous information flows.
Definitional Challenges & Free Speech Concerns
Defining “hate speech” is a classic eye-of-the-beholder problem: At what point does heated speech become hate speech and who should be in charge of drawing the line between the two? “The notion of a single definition of hate speech that everyone can agree on is probably illusory,” Foxman and Wolf note, especially because of “the continually evolving and morphing nature of online hate.” (p. 52, 103) “Like every other form of human communication, bigoted or hateful speech is always evolving, changing its vocabulary and style, adjusting to social and demographic trends, and reaching out in new ways to potentially receptive new audiences.” (p. 92)
Many free speech advocates (including me) argue that the government should not be in the business of ensuring that people never have their feelings hurt. Censorial solutions are particularly problematic here in the United States since they would likely run afoul of the protections secured by the First Amendment of the U.S. Constitution.
The clear trajectory of the Supreme Court’s free speech jurisprudence over the past half-century has been in the direction of constantly expanding protection for freedom of expression, even of the most repugnant, hateful varieties. Most recently, in Snyder v. Phelps, for example, the Court ruled that the Westboro Baptist Church could engage in hateful protests near the funerals of soldiers. “[T]his Nation has chosen to protect even hurtful speech on public issues to ensure that public debate is not stifled,” ruled Chief Justice John Roberts for the Court’s 8-1 majority. The Court has also recently held that the First Amendment protects lying about military honors (United States v. Alvarez, 2012), animal cruelty videos (United States v. Stevens, 2010), computer-generated depictions of child pornography (Ashcroft v. Free Speech Coalition, 2002), and the sale of violent video games to minors (Brown v. EMA, 2011). This comes on top of over 15 years of Internet-related jurisprudence in which courts have struck down every effort to regulate online expression.
Some will celebrate this jurisprudential revolution; others will lament it. Regardless, it is likely to remain the constitutional standard here in the U.S. As a result, there is almost no chance that courts here would allow restrictions on hate speech to stand. That means alternative approaches will continue to be relied upon to address it.
Foxman and Wolf acknowledge these constitutional hurdles but also point out that there are other reasons why “laws attempting to prohibit hate speech are probably one of the weakest tools we can use against bigotry.” (p. 171) Most notably, there is the scope and volume problem: “the sheer vastness of the challenge” (p. 103), which means “it’s simply impossible to monitor and police the vast proliferation of bigoted content being distributed through Web 2.0 technologies.” (p. 81) “The borderless nature of the Internet means that, like chasing cockroaches, squashing one offending website, page, or service provider does not solve the problem; there are many more waiting behind the walls — or across the border.” (p. 82) That’s exactly right, and it also explains why solutions of a more technical nature aren’t likely to work very well either.
Foxman and Wolf also point out how hate speech laws could backfire and have profound unintended consequences. Beyond targeted laws that address true threats, harassment, and direct incitements to violence, Foxman and Wolf argue that “broader regulation of hate speech may send an ‘educational message’ that actually weakens rather than strengthens our system of democratic values.” (p. 171) That’s because such censorial laws and regulations undermine the very essence of deliberative democracy — the robust exchange of potentially controversial views — and invite untrammeled majoritarianism. Worse yet, legalistic attempts to shut down hate speech can end up creating martyrs for fringe movements and, paradoxically, end up fueling conspiracy theories. (p. 80)
The Essential Role of Counter-speech & Education
Yet, “the challenge of defining hate speech shouldn’t lead us to give up on solving the problem,” argue Foxman and Wolf. (p. 53) We must, they argue, refocus our efforts around “education as a bulwark of freedom.” (p. 170) Digital literacy — teaching citizens respectful online behavior — is the key to those education efforts.
A vital part of digital literacy efforts is the encouragement of counter-speech solutions to online hate. “[T]he best antidote to hate speech is counter-speech – exposing hate speech for its deceitful and false content, setting the record straight, and promoting the values of respect and diversity,” note Foxman and Wolf. (p. 129) Or, as the old saying goes, the best response to bad speech is better speech. This principle has infused countless Supreme Court free speech decisions over the past century, and it continues to make good sense. But we could do more through education and digital literacy efforts to encourage more and better forms of counter-speech going forward.
“Counter-speech isn’t only or even primarily about debating hate-mongers,” they note. “It’s about helping to create a climate of tolerance and openness for people of all kinds, not just on the Internet but in every aspect of local, community, and national life.” (p. 146) This is how digital literacy becomes digital citizenship. It’s about forming smart norms and personal best practices regarding beneficial online interactions.
Intermediary Policing
What more can be done beyond education and counter-speech efforts? Foxman and Wolf envision a broad and growing role for intermediaries to help police viral hate. “We are convinced that if much of the time and energy spent advocating legal action against hate speech was used in collaborating and uniting with the online industry to fight the scourge of online hate, we would be making more gains in this fight,” they say. (p. 121) Among the steps they would like to see online operators take:
- Establishing clear hate speech policies in their Terms of Service and mechanisms for enforcing them;
- Making it easier for users to flag hate speech and to speak out against it;
- Facilitating industry-wide education and best practices via multi-stakeholder approaches; and
- Limiting anonymity and moving to “real-name” policies to identify speakers.
De-anonymization / Real-name policies
Most of these are eminently sensible solutions that should serve as best practices for online service providers and social media platform operators. But their last suggestion — that sites consider limiting anonymous speech — will be controversial, especially at a time when many feel that privacy is already at serious risk online and when some critics argue that intermediaries already “censor” too much content as it is. (See, for example, this Jeff Rosen essay on “The Delete Squad: Google, Twitter, Facebook and the New Global Battle over the Future of Free Speech” and this Evgeny Morozov editorial, “You Can’t Say That on the Internet.”)
Anonymous online speech certainly facilitates plenty of nasty online comments. There’s plenty of evidence — both scholarly and anecdotal — that “deindividuation” occurs when people can post anonymously. As Foxman and Wolf explain it: “People who are able to post anonymously (or pseudonymously) are far more likely to say awful things, sometimes with awful effects. Speaking from behind a blank wall that shields a person from responsibility encourages recklessness – it’s far easier to hit the ‘send’ button without a second thought under those circumstances.” (p. 114)
On the other hand, there needs to be a sense of balance here. We protect anonymous speech for the same reason we protect all other forms of speech, no matter how odious: With the bad comes a lot of good. Forcing all users to identify themselves to get at a handful of troublemakers is overkill, and it would result in the chilling of a huge amount of legitimate speech.
Nonetheless, many governments across the globe are pushing for restrictions on anonymous speech. As Cole Stryker noted in his recent book, Hacking the Future: Privacy, Identity, and Anonymity on the Web, what we are seeing is “an all-out war on anonymity, and thus free speech, waged by a variety of armies with widely diverse motivations, often for compelling reasons.” (p. 229) Stryker is right. In fact, less than two weeks ago, a French court ordered Twitter to produce the names of the people behind anti-Semitic tweets that appeared on the site last year. Meanwhile, plenty of academics, including many here in the U.S., have stepped up their efforts to ban or limit online anonymity. If you don’t believe me, I suggest you read a few of the chapters of The Offensive Internet: Speech, Privacy, and Reputation (Saul Levmore & Martha C. Nussbaum, eds.). It’s a veritable fusillade against anonymity as well as Section 230, the U.S. law that limits liability for intermediaries who post materials by others.
In Viral Hate, Foxman and Wolf stop short of suggesting legal restrictions on anonymity, preferring to stick with experimentation among private intermediaries. One of the book’s authors (Wolf) penned an essay in The New York Times last November (“Anonymity and Incivility on the Internet”) suggesting that while “this is not a matter for government… it is time for Internet intermediaries voluntarily to consider requiring either the use of real names (or registration with the online service) in circumstances, such as the comments section for news articles, where the benefits of anonymous posting are outweighed by the need for greater online civility.” Specifically, Wolf wants the rest of the Net to follow Facebook’s lead: “It is time to consider Facebook’s real-name policy as an Internet norm because online identification demonstrably leads to accountability and promotes civility.”
These proposals prompted strong responses from some academics and average readers who decried the implications of such a move for both privacy and free speech. But, again, it is worth reiterating that Foxman and Wolf do not call for government mandates to achieve this. “[T]his notion of promulgating a new standard of accountability online is not a matter for government intervention, given the strictures of the First Amendment,” they argue. (p. 117)
However, Foxman and Wolf do suggest one innovative alternative that merits attention: premium placement for registered commenters. The New York Times and some other major content providers have experimented with premium placement, whereby those registered on the site have their comments pushed up in the queue while other comments appear down below them. On the other hand, I don’t like the idea of having to register for every news or content site I visit, so I would hope such approaches are used selectively. Another useful approach involves letting users of various social media sites and content services determine whether they wish to allow comments on their user-generated content at all. Of course, many sites and services (such as YouTube, Facebook, and most blogging services) already allow that.
Conclusion
There are times in the book when Foxman and Wolf push their cause with a bit too much rhetorical flair, as when they claim that “Hitler and the Nazis could never have dreamed of such an engine of hate” as the Internet. (p. 10) Perhaps there is something to that, but it is also true that Hitler and the Nazis could never have dreamed of a platform for individual empowerment, transparency, and counter-speech such as the Internet. It was precisely because they were able to control the very limited media and communications platforms of their age that the Nazis were able to exert total control over information systems and create a propaganda hate machine that faced no serious challenge from the public or other nations. Just ask Arab dictators which age they’d prefer to rule in! It is certainly much harder for today’s totalitarian thugs to keep secrets bottled up, and it is equally hard for them to spread lies and hateful propaganda without being met with a forceful response from the general citizenry as well as those in other nations. So the “Hitler-would-have-loved-the-Net” talk is unwarranted.
I’m also a bit skeptical of some of the metrics used to measure this problem. While there is clearly plenty of online hate to be found across the Net today, efforts to quantify it inevitably run right back into the same subjective definition problems that Foxman and Wolf do such a nice job explaining throughout the text. If we have such a profound “eye-of-the-beholder” problem at work here, how can we be sure that quantitative counts are accurate? That doesn’t mean I’m opposed to efforts to quantify online hate; rather, we just need to take such measures with a grain of salt.
Finally, I wish the authors had developed more detailed case studies of how companies outside the mainstream are dealing with these issues today. Foxman and Wolf focus on big players like Google, Facebook, and Twitter for obvious reasons, but plenty of other online providers and social media operators have policies and procedures in place today to deal with online hate speech. A more thorough survey of those differing approaches might have helped us gain a better understanding of which policies make the most sense going forward.
Despite those small nitpicks, Foxman and Wolf have done a great service here by offering us a penetrating examination of the problem of online hate speech while simultaneously explaining the practical solutions necessary to combat it. Some will be dissatisfied with their pragmatic approach to the issue: on one hand, some will feel the authors have not gone far enough in bringing the law to bear on these problems; on the other, some will desire a more forceful defense of freedom of speech and a call to simply grow a thicker skin in response to viral hate. But I believe Foxman and Wolf have struck exactly the right balance here and given us a constructive blueprint for addressing these vexing issues going forward.