Thoughts on Content Moderation Online

March 25, 2021

Content moderation online is a newsworthy and heated political topic. In the past year, social media companies and Internet infrastructure companies have gotten much more aggressive about banning and suspending users and organizations from their platforms. Today, Congress is holding another hearing for tech CEOs to explain and defend their content moderation standards. Relatedly, Ben Thompson at Stratechery recently had interesting interviews with Patrick Collison (Stripe), Brad Smith (Microsoft), Thomas Kurian (Google Cloud), and Matthew Prince (Cloudflare) about the difficult road ahead re: content moderation by Internet infrastructure companies.

I’m unconvinced of the need to rewrite Section 230, but like the rest of the Telecom Act, which turned 25 last month, the law is showing its age. There are legal questions about Internet content moderation that would benefit from clarification by courts and legal scholars.

(One note: social media common carriage, which some advocates on the left, right, and center have proposed, won’t work well, largely for the same reasons ISP common carriage doesn’t: heterogeneous customer demands and a complex technical interface to regulate. That’s a topic for another essay.)

The recent increase in content moderation and user bans raises questions, for lawmakers in both parties, about how these practices interact with existing federal laws and court precedents. Some legal issues that deserve attention from industry, scholars, and courts:

Public Officials’ Social Media and Designated Public Forums

Does Knight Institute v. Trump prevent social media companies from censoring speech on public officials’ social media pages?

The 2nd Circuit, in Knight Institute v. Trump, deemed the “interactive space” beneath President Trump’s tweets a “designated public forum,” which meant that “he may not selectively exclude those whose views he disagrees with.” For the 2nd Circuit and any courts that follow that decision, the “interactive space” of most public officials’ Facebook pages, Twitter feeds, and YouTube pages seems to be a designated public forum.

I read the Knight Institute decision when it came out, and I couldn’t shake the feeling that it had some unsettling implications. The reason the decision seems amiss struck me recently:

Can it be lawful for a private party (Twitter, Facebook, etc.) to censor members of the public who are using a designated public forum (like replying to President Trump’s tweets)? 

That can’t be right. We have designated public forums in the physical world, as when a city council rents a church auditorium or Lions Club hall for a public meeting. All speech in a designated public forum is accorded the strong First Amendment rights found in traditional public forums. I’m unaware of a case on the subject, but a court is unlikely to allow the private owner of the venue, like the church, to censor speakers or dictate who may speak while its facilities are being used as a designated public forum.

The straightforward implication from Knight Institute v. Trump seems to be that neither politicians nor social media companies can make viewpoint-based decisions about who can comment on or access an official’s social media account.

Knight Institute creates more First Amendment problems than it solves, and could be reversed someday. [Ed. update: In April 2021, the Supreme Court vacated the 2nd Circuit decision as moot since Trump is no longer president. However, a federal district court in Florida concluded, in Attwood v. Clemons, that public officials’ “social media accounts are designated public forums.” The Knight Institute has likewise sued Texas Attorney General Paxton for blocking users and claimed that his social media feed is a designated public forum. It’s clear more courts will adopt this rule.] But to the extent Knight Institute v. Trump is good law, it seems to limit how social media companies moderate public officials’ pages and feeds.

Cloud Neutrality

How should tech companies, lawmakers, and courts interpret Section 512?

Wired recently published a piece about “cloud neutrality,” which draws on net neutrality norms of nondiscrimination towards content and applies them to Internet infrastructure companies. I’m skeptical of both the need for the idea and its constitutionality, but, arguably, the US has a soft version of cloud neutrality embedded in Section 512 of the DMCA.

The law extends the copyright liability safe harbor to Internet infrastructure companies only if:

the transmission, routing, provision of connections, or storage is carried out through an automatic technical process without selection of the material by the service provider.

17 USC § 512(a).

Perhaps a copyright lawyer can clarify, but it appears that Internet infrastructure companies may lose their copyright safe harbor if they handpick material to censor. To my knowledge, there is no scholarship or court decision on this question.

State Action

What evidence would a user-plaintiff need to show that their account or content was removed due to state action?

Most complaints of state action in social media companies’ content moderation are dubious. And while state action is hard to prove, in narrow circumstances it may apply. The Supreme Court has said that when there is a “sufficiently close nexus between the State and [a] challenged action,” the action of a private company will be treated as state action. For that reason, content removals made after non-public pressure or demands from federal and state officials to social media moderators likely aren’t protected by the First Amendment or Section 230.

Most examples of federal and state officials privately jawboning social media companies will never see the light of day. However, such jawboning probably occurs. Based on Politico reporting, for instance, it appears that officials in a few states leaned on social media companies to remove anti-lockdown protest events last April. It’s hard to know exactly what occurred in those private conversations, and Politico has updated the story a few times, but examples like that may qualify as state action.

Any public official who engages in non-public jawboning that results in content moderation could also face a Section 1983 claim: civil liability for depriving an affected user of constitutional rights.

Finally, what should Congress do about foreign state action that results in tech censorship in the US? A major theme of the Stratechery interviews is that many tech companies feel pressure to set their moderation standards based on what foreign governments censor and prohibit. Content removal from online services because of foreign influence isn’t a First Amendment problem, but it is a serious free speech problem for Americans.

Many Republicans and Democrats want to punish large tech companies for real or perceived unfairness in content moderation. That’s politics, I suppose, but it’s a damaging instinct. For one thing, the Section 230 fixation distracts free-market and free-speech advocates from, among other things, alarming proposals for changes to the FEC that would empower it to criminalize more political speech. The singular focus on Section 230 repeal or reform also distracts from the other legal questions about content moderation discussed above. Hopefully the Biden DOJ or congressional hearings will take some of these up.
