Facebook Triggers Another False Alarm over Corporate “Censorship”

May 23, 2010

Leo Laporte claimed today on Twitter that Facebook had censored Texas radio station KNOI Real Talk 99.7 by banning it from Facebook “for talking about privacy issues and linking to my show and Diaspora [a Facebook competitor].” Since Leo has a Twitter audience of 193,884 followers and an even larger number of listeners to his This Week In Tech (TWIT) podcast, this charge of censorship (allegedly involving another station, KRBR, too) will doubtless attract a great deal of attention, and help lay the groundwork for imposing “neutrality” regulations on social networking sites—namely, Facebook.

Problem is: it’s just another false alarm in a long series of unfounded and/or grossly exaggerated claims. Facebook spokesman Andrew Noyes responded:

The pages for KNOI and KRBR were disabled because one of our automated systems for detecting abuse identified improper actions on the account of the individual who also serves as the sole administrator of the Pages. The automated system is designed to keep spammers and potential harassers from abusing Facebook and is triggered when a user sends too many messages or seeks to friend too many people who ignore their requests. In this case, the user sent a large number of friend requests that were rejected. As a result, his account was disabled, and in consequence, the Pages for which he is the sole administrator were also disabled. The suggestion that our automated system has been programmed to censor those who criticize us is absurd.

Absurd, yes, but when the dust has settled, how many people will remember this technical explanation, when the compelling headline is “Facebook Censors Critics!”? There is a strong parallel here to arguments for net neutrality regulations, which always boil down to claims that Internet service providers will abuse their “gatekeeper” or “bottleneck” power to censor speech they don’t like or squelch competitive threats. Here are just a few of the silly anecdotes that are constantly bandied about in these debates as a sort of “string citation” of the need for regulatory intervention:

  • August 2007: AT&T was attacked for “censoring” a webcast of a Pearl Jam concert by bleeping out words critical of our former Dear Leader, George W. Bush. AT&T quickly apologized for the incident, explaining that an over-anxious contractor hired to bleep out indecent content had simply gone overboard.
  • October 2007: A low-level Verizon employee initially declined to issue a short message service (SMS) code to NARAL—allegedly because of anti-abortion bias. But the company, facing heavy public criticism, quickly reversed the decision within barely 24 hours of it coming to public attention, as Tim Lee and Adam Thierer noted.
  • The Christian Coalition still supports Net Neutrality regulation, apparently because they have been conned into thinking that Comcast deliberately singled out the Bible when it throttled BitTorrent traffic back in 2007. (The AP reporter who wrote the original story very cleverly chose to transmit a small text file of the Bible via BitTorrent, knowing that it would be throttled just like all other BitTorrent packets, but making for a better headline.)
  • Most recently, Free Press and Public Knowledge blatantly fabricated another phony censorship “incident” in which Sprint supposedly blocked Catholic Charities from getting an SMS code for Haiti fundraising efforts.

These tempests-in-teapots seem rather silly in retrospect, but that doesn’t stop advocates of sweeping, prophylactic “neutrality” regulations from citing them incessantly as “evidence” (the plural of “anecdote,” as everyone knows!) of the need for government to intervene swiftly.

Such stories are sure to arise with increasing frequency regarding social networking sites and search engines—especially when the underlying process that causes someone’s speech to be removed is technically complicated and difficult for bloggers and journalists to understand or explain succinctly to a lay audience. I squelched just such a story back in 2008, when some conservative critics of net neutrality regulations unfairly leapt to the conclusion that Google was censoring my think tank, The Progress & Freedom Foundation, because of our skeptical views on net neutrality. In fact, Google’s anti-malware service had flagged our site as potentially dangerous because our SQL database had been hacked, potentially exposing visitors to our site to malicious code—and our webmaster simply didn’t deal with the problem.

SNSes like Facebook and search engines like Google necessarily rely on automated processes to police content and protect users from real threats. Inevitably, these processes will make mistakes, or will sometimes punish, for good reason, people who then leap to the conclusion that they are being censored for their views. As Adam Thierer and I warned last October in our Forbes.com editorial, “Net Neutrality, Slippery Slopes & High-Tech Mutually Assured Destruction,” the rationale for net neutrality would inevitably extend to online services and applications—to the great detriment of online innovation and the consumers it benefits:

Sincere defenders of real Internet Freedom—that is, freedom from government techno-meddling—recognize that there will always be disputes over how companies deal with each other online across all layers of the Internet. The question is not whether we need a technical coordinating mechanism for handling such disputes. Someone should mediate conflicts over alleged deviations from abstract neutrality principles. But should that arbitrator be an inherently political body like the FCC? Or should we instead look to truly independent, apolitical arbitrators like the Internet Engineering Task Force or collaborative efforts like the Network Neutrality Squad? Such alternative dispute resolution mechanisms and fora need not have the power of law to be effective: The weight of their expert opinion, based on careful investigation of the facts, would likely resolve most disputes, because companies have strong reputational incentives to comply with reasoned rulings by truly neutral experts. And the white hot spotlight of public attention has a way of disciplining marketplace behavior as well.

So I’m quite serious when I call for the creation of something like an “Internet Corporate Censorship Squad” of objective experts to “separate the wheat from the chaff” when it comes to these claims (although, to be precise, “censorship” is something only governments can do). Until that happens, we’ll be left dealing with one blogosphere frenzy over “censorship” after another—at least until the government decides to fix the non-existent “problem” by imposing ex ante, prophylactic regulation.
