No Goldilocks Formula for Content Moderation in Social Media or the Metaverse, But Algorithms Still Help

September 13, 2022

[Cross-posted from Medium.]

In an age of hyper-partisanship, one issue unites the warring tribes of American politics like no other: hatred of “Big Tech.” You know, those evil bastards who gave us instantaneous access to a universe of information at little to no cost. Those treacherous villains! People are quick to forget the benefits of moving from a world of Information Poverty to one of Information Abundance, preferring to take for granted all they’ve been given and then find new things to complain about.

But what mostly unites people against large technology platforms is the feeling that they are just too big or too influential relative to other institutions, including government. I get some of that concern, even if I strongly disagree with many of the critics’ proposed solutions, such as the highly dangerous sledgehammer of antitrust breakups or sweeping speech controls. Breaking up large tech companies would not only compromise the many benefits they provide us with but also undermine America’s global standing as a leader in information and computational technology. We don’t want that. And speech codes or meddlesome algorithmic regulations are on a collision course with the First Amendment and will just result in endless litigation in the courts.

There’s a better path forward. As President Ronald Reagan rightly said in 1987 when vetoing a bill to reestablish the Fairness Doctrine, “History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.” In other words, as I wrote in a previous essay about “The Classical Liberal Approach to Digital Media Free Speech Issues,” more innovation and competition are always superior to more regulation when it comes to encouraging speech and speech opportunities.

Can Government Get Things Just Right?

But what about the accusations we hear on both the left and right that tech companies are failing to properly manage or moderate online content? This is a concern not only for today’s most popular social media platforms but also, increasingly, for the so-called Metaverse, where questions about content policies already surround activities and interactions on AR and VR systems.

The problem here is that different people want different things from digital platforms when it comes to content moderation. As I noted in a column for The Hill late last year:

there is considerable confusion in the complaints both parties make about “Big Tech.” Democrats want tech companies doing more to limit content they claim is hate speech, misinformation, or that incites violence. Republicans want online operators to do less, because many conservatives believe tech platforms already take down too much of their content.

Thus, large digital intermediaries are expected to make all the problems of the world go away through a Goldilocks formula whereby they get content moderation “just right.” It’s an impossible task with billions of voices speaking. Bureaucrats won’t do a better job refereeing these disputes, and letting them try will turn every content spat into an endless regulatory proceeding.

What Algorithms Can and Cannot Do to Help

But we should be clear on one thing: These disputes will always be with us, because every media platform in history has had some sort of content moderation policy, even if we didn’t call it that until recently. Creating what used to just be called guidelines or standards for information production and dissemination has always been a tricky business. But the big difference between the old days and the new comes down to three big problems:

#1- the volume problem: There’s just a ton of content online to moderate today compared to the past.

#2- the subjectivity problem: Content moderation always involves “eye of the beholder” questions, and now there are even more of them because of Problem #1.

#3- the crafty adversaries problem: There are a lot of people bound and determined to get around any rules or restrictions platforms impose, and they’ll find creative ways to do so.

These problems are nicely summarized in an excellent new AEI report by Alex Feerst, “The Use of AI in Online Content Moderation.” This is the fifth in a series of new reports from the AEI’s Digital Platforms and American Life project. The goal of the project is to highlight how the “democratization of knowledge and influence comes with incredible opportunities but also immense challenges. How should policymakers think about the digital platforms that have become embedded in our social and civic life?” Various experts have been asked to sound off on that question and address different challenges. The series kicked off in April with an essay I wrote on “Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium.” More studies are coming.

In Feerst’s new report, the focus is squarely on the issue of algorithmic content moderation policies and procedures. Feerst provides a brilliant summary of how digital media platforms currently utilize AI to assist their content moderation efforts. He notes:

The short answer to the question “why AI” is scale — the sheer never-ending vastness of online speech. Scale is the prime mover of online platforms, at least in their current, mainly ad-based form and maybe in all incarnations. It’s impossible to internalize the dynamics of running a digital platform without first spending some serious time just sitting and meditating on the dizzying, sublime amounts of speech we are talking about: 500 million tweets a day comes out to 200 billion tweets each year. More than 50 billion photos have been uploaded to Instagram. Over 700,000 hours of video are uploaded to YouTube every day. I could go on. Expression that would previously have been ephemeral or limited in reach under the existing laws of nature and pre-digital publishing economics can now proliferate and move around the world. It turns out that, given the chance, we really like to hear ourselves talk.

So that’s the scale/volume problem in a nutshell. Algorithmic systems will therefore be needed to help with at least some of the sifting and sorting, as the sketch below illustrates.
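To make that triage idea concrete, here is a minimal Python sketch of the kind of logic algorithmic moderation enables at scale. It is my own illustration, not Feerst’s and not any platform’s actual system: the keyword-based scoring function is a toy stand-in for a trained classifier, and the thresholds are hypothetical policy settings.

```python
# A minimal sketch of machine-first triage: score everything automatically,
# act on the clear cases, and route the uncertain middle to human reviewers.
from dataclasses import dataclass

# Hypothetical thresholds; real platforms tune these per policy, language, and context.
AUTO_REMOVE_THRESHOLD = 0.9
HUMAN_REVIEW_THRESHOLD = 0.5

@dataclass
class Decision:
    post_id: str
    score: float
    action: str  # "remove", "human_review", or "allow"

def toy_risk_score(text: str) -> float:
    """Toy stand-in for a trained classifier: counts a few obviously risky phrases.
    A production model would return a calibrated probability instead."""
    risky_terms = ["buy followers now", "click this link to win"]
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, 0.6 * hits)

def triage(post_id: str, text: str) -> Decision:
    score = toy_risk_score(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        action = "remove"        # high-confidence violation, handled by the machine
    elif score >= HUMAN_REVIEW_THRESHOLD:
        action = "human_review"  # the hard, subjective middle goes to people
    else:
        action = "allow"
    return Decision(post_id, score, action)

if __name__ == "__main__":
    posts = {
        "p1": "Here is my vacation photo.",
        "p2": "Click this link to win a free prize!",
        "p3": "Buy followers now! Click this link to win big prizes!",
    }
    for pid, text in posts.items():
        print(triage(pid, text))
```

Even in this toy version, the key point is visible: the thresholds and the definition of “risky” encode human policy judgments, which is exactly where the subjectivity problem discussed next comes in.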

What Do You Want to Do about Man-Boobs?

But then we immediately run into the subjectivity problem that pervades so many content moderation issues. When it comes to topics like hate speech, “There will be as many opinions as there are people. Three well-meaning civic groups will agree on four different definitions of hate speech,” Feerst notes.

Indeed, these eye-of-the-beholder judgment calls are ubiquitous and endlessly frustrating for content moderators. Let me tell you a quick story I shared with a Wall Street Journal reporter who asked me in 2019 why I had given up helping tech companies handle these content moderation controversies. I had spent many years advising companies and trade associations on these issues because I had been writing about the underlying challenges since the late 1990s. But finally I gave up. Why? Because of man boobs. Yes, man boobs. Here’s the summary of my story from that WSJ article:

Adam Thierer, a senior research fellow at the right-leaning Mercatus Center at George Mason University, says he used to consult with Facebook and other tech companies. The futility of trying to please all sides hit home after he heard complaints about a debate at YouTube over how much skin could be seen in breast-feeding videos.

While some argued the videos had medical purposes, other advisers wondered whether videos of shirtless men with large mammaries should be permitted as well. “I decided I don’t want to be the person who decides on whether man boobs are allowed,” says Mr. Thierer.

No, seriously. This has been one of the many crazy problems that content moderators have had to deal with. There are scumbag dudes with large mammaries who not only salaciously jiggle them around on camera for the world to see, but then even put whipped cream on their own boobs and lick it off. Now, if a woman does that and posts it on almost any mainstream platform, it’ll quickly get flagged (probably by an algorithmic filter) and likely be blocked immediately. But if a dude with man boobs does the same thing, shouldn’t the policy be the same? Well, in our still very sexist world of double standards, policies can vary on that question. And I didn’t want any part of trying to figure out an answer to that question (and others like it), so I largely got out of the business of helping companies do so. Not even King Solomon could figure out a fair resolution to some of this stuff.

Algorithms can only help us so much here because, at some point, humans must tell the machines what to flag or block using some sort of subjective standard, and that standard will lead to all sorts of problems later. This is one reason Feerst reminds us of another important rule: “Don’t confuse a subjectivity problem for an accuracy problem, especially when you’re using automation technology.” As he notes:

If the things we’re doing are controversial among humans and it’s not even clear that humans judge them consistently, then using AI is not going to help. It’s just going to allow you to achieve the same controversial outcomes more quickly and in greater volume. In other words, if you can’t get 50 humans to agree on whether a particular post violates content rules, whether that content rule is well formulated, or whether that rule should exist, then why would automating this process help?
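Feerst’s 50-humans point can be made concrete with a toy simulation. The setup below is my own hypothetical illustration, not something from his report: when raters agree on a borderline post only a little more often than a coin flip, majority-vote labels, and any model trained on them, inherit that disagreement.

```python
# A toy simulation of the agreement ceiling: automation reproduces whatever
# level of human consensus (or lack of it) went into the labels.
import random

random.seed(0)  # reproducible illustration

def simulate_ratings(num_raters: int, prob_violation_vote: float) -> list[int]:
    """Each rater independently votes 1 (violates) or 0 (allowed).
    prob_violation_vote stands in for how borderline the post is."""
    return [1 if random.random() < prob_violation_vote else 0 for _ in range(num_raters)]

def pairwise_agreement(votes: list[int]) -> float:
    """Fraction of rater pairs that gave the same label."""
    n = len(votes)
    same = sum(1 for i in range(n) for j in range(i + 1, n) if votes[i] == votes[j])
    return same / (n * (n - 1) / 2)

if __name__ == "__main__":
    # A clear-cut post versus a genuinely borderline one, rated by 50 hypothetical reviewers.
    for label, p in [("clear-cut post", 0.95), ("borderline post", 0.55)]:
        votes = simulate_ratings(50, p)
        verdict = "violates" if sum(votes) > len(votes) / 2 else "allowed"
        print(f"{label}: majority says {verdict}, pairwise agreement = {pairwise_agreement(votes):.2f}")
```

Running this shows the contrast: the clear-cut post produces high agreement, while the borderline post hovers near a coin flip. Automating that second case just produces the same contested verdicts faster, which is precisely Feerst’s point.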

So Many Troublemakers (Sometimes Accidental)

The man boobs moderation story also reminds us that the crafty adversary problem will always haunt us. There are just so many bastards out there looking to cause trouble for whatever reason. “There will never be ‘set it and forget it’ technologies for these issues,” Feerst argues. “At best, it’s possible to imagine a state of dynamic equilibrium — eternal cops and robbers.”

That is exactly right. It’s a never-ending learning/coping process, as I noted in my earlier paper in the AEI series: “There is no Goldilocks formula that can get things just right” when it comes to many tech governance issues, especially content moderation issues. Muddling through is the new normal. And the exact same process is now unfolding for Metaverse content moderation. Algorithmic moderation helps us weed out the worst stuff and gives us a better chance of letting humans — with their limited time and resources — deal with the hardest problems (and problem-makers) out there.

Sometimes the content infractions may even be accidental. Here’s another embarrassing story involving me. I was asked last year to sit in on a VR meeting about content moderation in the Metaverse. I was wearing my headset and sitting at a virtual table with about eight other people in the room. Back in my real-world office, I had my coffee mug sitting far to the right of me on a side table. After about 45 minutes of discussion, I realized that every time I reached way over to my right to grab my coffee mug in the real world, my virtual self’s hand was reaching over and touching the crotch of the guy sitting next to me in the Metaverse! It looked like I was virtually fondling the dude! What a nightmare. I’m surprised someone didn’t report me for virtual harassment. I would have had to plead the coffee mug defense and throw myself on the mercy of the Meta-Court judge or jury.

OK, so that’s a funny story, but you can imagine little mistakes like this happening all throughout the Metaverse as we slowly figure out how to interact normally in new virtual environments. We’ll have to rely on users and algorithms to flag some of the worst behaviors and then have humans evaluate the tough calls to the best of their abilities. But let’s not be fooled into thinking that humans can handle all these questions on their own; the task at hand is too overwhelming and expensive for many platform operators. “Ten thousand employees here, ten thousand ergonomic mouse pads there, and pretty soon we’re talking about real money,” Feerst notes. “This is what the cost of running a platform looks like, once you’ve internalized the harmful and inexorable externalities we’ve learned about the hard way over the past decade.”

The Problem with “Explainability”

The key takeaway here is that content moderation at scale is messy, confusing, and unsatisfying. Do platforms need to be more transparent about how their algorithms work to do this screening? Yes, they do. But perfect transparency or “explainability” is impossible.

It’s hard to perfectly explain how algorithms work for the same reason it’s hard for your car mechanic to explain to you exactly how your car engine works. Except it’s even harder with algorithmic systems. As Feerst notes:

AI outputs can be hard to explain. In some cases, even the creators or managers of a particular product are no longer sure why it is functioning a particular way. It’s not like the formula to Coca-Cola; it’s constantly evolving. Requirements to “disclose the algorithm” may not help much if it means that companies will simply post a bunch of not especially meaningful code.

And if explainability were mandated by law, it’d instantly be gamed by still other troublemakers out there. A mandate to make AI perfectly transparent is an open invitation to every scam artist in the world to game platforms with new phishing attacks, spammy scams, and other such nonsense. Again, this is the “crafty adversaries” problem at work. Endless cat-and-mouse or, as Feerst says, “eternal cops and robbers.”

So, in sum, content moderation — including algorithmic content moderation — is a nightmarishly difficult task, and there is no Goldilocks formula available to us that will help us get things just right. It’ll always just be endless experimentation and iteration with lots and lots of failures along the way. Learning by doing and constantly refining our systems and procedures is the key to helping us muddle through.

And if you think government will somehow figure this all out through some sort of top-down regulatory regime, ask yourself how well that worked out for Analog Era efforts to create “community standards” for broadcast radio and television. And then multiply that problem by a zillion. It cannot be done without severely undermining free speech and innovation. We don’t want to go down that path.

____________

Additional Reading

· “Again, We Should Not Ban All Teens from Social Media”

· “The Classical Liberal Approach to Digital Media Free Speech Issues”

· “AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead”

· “Left and right take aim at Big Tech — and the First Amendment”

· “When It Comes to Fighting Social Media Bias, More Regulation Is Not the Answer”

· “FCC’s O’Rielly on First Amendment & Fairness Doctrine Dangers”

· “Conservatives & Common Carriage: Contradictions & Challenges”

· “The Great Deplatforming of 2021”

· “A Good Time to Re-Read Reagan’s Fairness Doctrine Veto”

· “Sen. Hawley’s Radical, Paternalistic Plan to Remake the Internet”

· “How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality”

· “Sen. Hawley’s Moral Panic Over Social Media”

· “The White House Social Media Summit and the Return of ‘Regulation by Raised Eyebrow’”

· “The Not-So-SMART Act”

· “The Surprising Ideological Origins of Trump’s Communications Collectivism”
