Morozov’s Algorithmic Auditing Proposal: A Few Questions

November 19, 2012

In a New York Times op-ed this weekend entitled “You Can’t Say That on the Internet,” Evgeny Morozov, author of The Net Delusion, worries that Silicon Valley is imposing a “deeply conservative” “new prudishness” on modern society. The cause, he says, is “dour, one-dimensional algorithms, the mathematical constructs that automatically determine the limits of what is culturally acceptable.” He proposes that some form of external algorithmic auditing be undertaken to counter this supposed problem. Here’s how he puts it in the conclusion of his essay:

Quaint prudishness, excessive enforcement of copyright, unneeded damage to our reputations: algorithmic gatekeeping is exacting a high toll on our public life. Instead of treating algorithms as a natural, objective reflection of reality, we must take them apart and closely examine each line of code.

Can we do it without hurting Silicon Valley’s business model? The world of finance, facing a similar problem, offers a clue. After several disasters caused by algorithmic trading earlier this year, authorities in Hong Kong and Australia drafted proposals to establish regular independent audits of the design, development and modifications of computer systems used in such trades. Why couldn’t auditors do the same to Google?

Silicon Valley wouldn’t have to disclose its proprietary algorithms, only share them with the auditors. A drastic measure? Perhaps. But it’s one that is proportional to the growing clout technology companies have in reshaping not only our economy but also our culture.

It should be noted that in a Slate essay this past January, Morozov had also proposed that steps be taken to root out lies, deceptions, and conspiracy theories on the Internet. Morozov was particularly worried about “denialists of global warming or benefits of vaccination,” but he also wondered how we might deal with 9/11 conspiracy theorists, the anti-Darwinian intelligent design movement, and those who refuse to accept the link between HIV and AIDS.

To deal with that supposed problem, he recommended that Google “come up with a database of disputed claims” to weed out such things. The other option, he suggested, “is to nudge search engines to take more responsibility for their index and exercise a heavier curatorial control in presenting search results for issues” that someone (he never says who) determines to be conspiratorial or anti-scientific in nature.
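
To see how quickly such a scheme runs into hard questions, consider a minimal sketch of what a “database of disputed claims” applied to search results might look like. This is purely hypothetical Python; every name, entry, and field below is invented for illustration, not anyone’s actual system:

```python
# Hypothetical sketch of "heavier curatorial control": a lookup table of
# disputed claims used to annotate and demote matching search results.
# All names and data are invented for illustration.

DISPUTED_CLAIMS = {
    "vaccines cause autism": "Rejected by the medical consensus.",
    "climate change is a hoax": "Contradicted by the scientific consensus.",
}

def curate_results(results):
    """Attach a disclaimer to results matching a disputed claim, then demote them."""
    clean, flagged = [], []
    for result in results:
        text = (result["title"] + " " + result["snippet"]).lower()
        notes = [note for claim, note in DISPUTED_CLAIMS.items() if claim in text]
        if notes:
            result["disclaimer"] = notes[0]
            flagged.append(result)
        else:
            clean.append(result)
    return clean + flagged  # flagged items sink to the bottom of the page

results = [
    {"title": "Proof that vaccines cause autism", "snippet": "..."},
    {"title": "CDC page on vaccine safety", "snippet": "..."},
]
for r in curate_results(results):
    print(r["title"], "|", r.get("disclaimer", "no flag"))
```

Note that every hard question in the proposal, starting with who gets to write the table and on what authority, is simply buried inside the `DISPUTED_CLAIMS` data structure.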

Taken together, these essays can be viewed as a preliminary sketch of what could become a comprehensive information control apparatus instituted at the code layer of the Internet. Morozov absolutely refuses to be nailed down on the details of that system, however. In a response to his earlier Slate essay, I argued that Morozov seemed to be advocating some sort of Ministry of Truth for online search, although he came up short on the details of who or what should play that role. But in both that piece and his New York Times essay this weekend, he implies that greater oversight and accountability are necessary. “Is it time for some kind of a quality control system [for the Internet]?” he asked in his Slate op-ed. Perhaps it would be the algorithmic auditors he suggests in his new essay. But who, exactly, are those auditors? What is the scope of their powers?

When I (and others) made inquiries via Twitter requesting greater elaboration on these questions, Morozov summarily dismissed any conversation on the point. Worse yet, he engaged in what is becoming a regular Morozov debating tactic on Twitter: nasty, sarcastic, dismissive responses that call into question the intellectual credentials of anyone who even dares to ask him a question about his proposals. Unless you happen to be Bruno Latour — the obtuse French sociologist and media theorist whom Morozov showers with boundless, adoring praise — you can usually count on Morozov to dismiss you and your questions or concerns in a fairly peremptory fashion.

I’m perplexed by what leads Morozov to behave so badly. When I first met him a couple of years ago, it was at a Georgetown University event he invited me to speak at. He seemed like an agreeable, even charming, fellow in person. But on Twitter, Morozov bares his fangs at every juncture and spits out venomous missives and retorts that I would call sophomoric except that it would be an insult to sophomores everywhere. Morozov even accuses me of “trolling” him whenever I ask him questions on Twitter, even though I am doing nothing more than posing the same sort of hard questions to him that he regularly poses to others (albeit in a much snarkier fashion). He always seems eager to dish it out, but then throws a Twitter temper tantrum whenever the roles are reversed and the tough questions come his way. Perhaps Morozov is miffed by some of what I had to say in my mixed review of his first book, The Net Delusion, or my Forbes column that raised questions about his earlier proposal for an Internet “quality control” regime. But I invite others to closely read the tone of those two essays and tell me whether I said anything to warrant Morozov’s wrath. (In fact, I actually said some nice things about his book in that review and later named it the most important information technology policy book of the year.)

Regardless of what motivates his behavior, I do not think it is unreasonable to ask for more substantive responses from Morozov when he is making grand pronouncements and recommendations about how online culture and commerce should be governed. The best I could get him to say on Twitter is that he only had 1,200 words to play with in his latest Times op-ed and that more details about his proposal would be forthcoming. Well, in the spirit of getting that conversation going, allow me to outline a few questions:

1) What is the specific harm here that needs to be addressed?

  • Do you have evidence of systematic algorithmic manipulation or abuse by Google, Apple, or anyone else, for that matter? Or is this all just about a handful of anecdotes that seemed to be corrected fairly quickly?
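
For what it’s worth, the evidence-gathering Morozov would need is easy enough to sketch in the abstract. Below is a toy, black-box audit probe in Python; `fetch_suggestions` and all of the data are invented stand-ins, not any provider’s actual endpoint, so treat it as a thought experiment about method rather than a working tool:

```python
# Toy black-box "audit": feed seed terms to an autocomplete service and
# flag the ones that come back with no suggestions at all. The canned
# dictionary below stands in for the real endpoint an auditor would query.

CANNED_SUGGESTIONS = {
    "how to": ["how to tie a tie", "how to boil an egg"],
    "bittorrent": [],  # imagine a term the provider deliberately filters
}

def fetch_suggestions(query):
    """Invented stand-in for a provider's autocomplete endpoint."""
    return CANNED_SUGGESTIONS.get(query, [])

def audit_autocomplete(seed_terms):
    """Return the seed terms that yield no suggestions."""
    return [term for term in seed_terms if not fetch_suggestions(term)]

print(audit_autocomplete(["how to", "bittorrent"]))  # -> ['bittorrent']
```

Collecting a handful of filtered terms this way is trivial; demonstrating that the filtering is systematic, rather than a few quickly corrected anecdotes, is the part that requires baselines, repetition, and a theory of harm.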

2) What standard or metric should we use to measure this problem, to the extent we determine it is a problem at all?

  • To the extent autocomplete results are what troubles you, can you explain how individuals or entities are “harmed” by those results?
  • If this is about reputation, what is your theory of reputational harm, and when is it legally actionable?
  • If this is about informational quality or “truth,” can you explain what would constitute success?
  • Can you appreciate the concerns and values on the other side of this that might motivate some degree of algorithmic tailoring? For example, some digital intermediaries may seek to curb vulgarity, hate speech, or other offensive content on their sites since they are broad-based platforms with diverse audiences. (That’s why most search providers default to “moderate” filtering for image searches, for example; a simple sketch of how such a default works follows this list.) While I think we both favor maximizing free speech online, do you accept that some of this private speech and content balancing is entirely rational and has, to some extent, always gone on? Also, aren’t there plenty of other ways to find the content you’re looking for besides just Google, which you seem preoccupied with?
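
For readers unfamiliar with how a “moderate” default works mechanically, here is a deliberately simplified sketch. The threshold numbers and the `explicitness` score are assumptions invented for illustration; real providers use far more elaborate, and proprietary, classifiers:

```python
# Simplified sketch of a SafeSearch-style filter default. Each result is
# assumed to carry an "explicitness" score in [0, 1] from some upstream
# classifier; the filter level sets the cutoff. All values are invented.

THRESHOLDS = {"off": 1.0, "moderate": 0.7, "strict": 0.3}

def filter_images(results, level="moderate"):
    """Drop results whose explicitness score exceeds the level's cutoff."""
    cutoff = THRESHOLDS[level]
    return [r for r in results if r.get("explicitness", 0.0) <= cutoff]

images = [
    {"url": "cat.jpg", "explicitness": 0.05},
    {"url": "swimsuit.jpg", "explicitness": 0.5},
    {"url": "explicit.jpg", "explicitness": 0.95},
]
print(len(filter_images(images)))            # moderate default -> 2 results
print(len(filter_images(images, "strict")))  # strict -> 1 result
```

The point is simply that some content tailoring is a deliberate, defensible product choice, not evidence of a conspiracy.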

3) What is the proposed remedy and what are its potential costs and unintended consequences?

  • Can you explain the mechanism of control that you would like to see put in place to remedy this supposed problem? Would it be a formal regulatory regime?
  • Have you considered the costs and/or potential unintended consequences associated with an algorithmic auditing regime if it takes on a regulatory character?
  • For example, if you are familiar with how long many regulatory proceedings can take to run their course, do you not fear the consequences of interminable delays and political gaming?
  • How often should the “auditing” you propose take place? Would it be a regular affair, or would it be driven by complaints?

4) Is this regime national in scope? Global? How would it be coordinated and administered?

  • In the United States, would the Federal Communications Commission or the Federal Trade Commission be granted new authority to carry out algorithmic audits, or would a new entity need to be created?
  • Is additional regulatory oversight necessary and, if so, how would it be coordinated nationally and globally?

5) Are there freedom of speech / censorship considerations that flow from (3) and (4)?

  • At least in the United States, algorithmic audits that had the force of law behind them could raise serious freedom of speech concerns (see Yoo’s paper on “architectural censorship” and the recent work of Volokh & Grimmelmann on search regulation). Moreover, long-settled First Amendment law (see, e.g., Tornillo) ensures that editorial discretion is housed in private hands. How would you propose we get around these legal obstacles?

6) Are there less-restrictive alternatives to administrative regulation?

  • Might we be able to devise various alternative dispute resolution techniques to flag problems and deal with them in a non-regulatory / non-litigious fashion?
  • Could voluntary industry best practices and/or codes of conduct be developed to assist these efforts?
  • Could an entity like the Broadband Internet Technical Advisory Group (BITAG) help sort out “neutrality” claims in this context, as they do in the broadband context?
  • Might it be the case that social norms and pressure can keep this problem in check? The very act of shining light on silly algorithmic screw-ups — much as you have in your recent op-eds — may be remedy enough.

I hope that Morozov finds these questions to be reasonable. My skepticism of most Internet regulation is no secret, so I suppose that Morozov or others might attempt to dismiss some of these questions as the paranoid delusions of a wild-eyed libertarian. But I suspect that I’m not the only one who feels uneasy with Morozov’s proposals, since they could open the door for regulators across the globe to engage in “algorithmic auditing” on the flimsy assumption that some great harm flows from a few silly autocomplete suggestions or a couple of conspiratorial websites. We deserve answers to questions like these before we start calling in the Code Cops to assume greater control over online speech.
