The Amorality of Google Searches

October 5, 2006

Nick Carr has a puzzling post about the fact that a Google search for “Martin Luther King, Jr.” returns, as its first result, a white supremacist website. The first sentence of the post is “It’s funny how a set of instructions – an algorithm – written by people can come to be granted, by those same people, a superhuman authority.”

One might expect the post to go on to identify someone granting Google’s algorithm a superhuman authority. One would be wrong. Carr writes that AOL, which has licensed Google’s search engine for its own site, “finds itself in the uncomfortable position of promoting the white supremacist site to its customers.” Google, for its part, says its results are generated automatically and that it “can’t tweak the results because of that automation and the need to maintain the integrity of the results.” Carr concludes scornfully that Google believes that “human judgment is an unfit substitute for the mindless, automated calculations of an algorithm.”

This is silly. It’s not hard to understand why Google would be reluctant to second-guess the results of its algorithm. The issue isn’t that human judgment is inferior to algorithmic results; the problem is that human judgment is incredibly expensive compared with computing power. If Google tweaked this result, it obviously wouldn’t stop there. I’m sure the White House would immediately write to Google about results it disliked, and tens of thousands of other individuals and groups who believe some search result or another has slighted them would come out of the woodwork to complain. Google would have to hire new staffers to draft rules governing when search results get suppressed, and then hire many more staffers to apply those rules to thousands upon thousands of individual complaints.

This would have two detrimental effects. First, it would make Google more, not less, vulnerable to criticism for its search results. Right now, it can plausibly say that the search results are an objective reflection of the collective judgment of other websites. Once Google starts reordering some search results, any offensive results that remain carry a much stronger Google imprimatur. Hence, once Google altered one result, it would face increased pressure to review every controversial result.
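Google’s claim to objectivity is not just rhetoric; it describes how the ranking is actually computed. Here is a toy sketch in Python of the PageRank idea behind it (the three-page link graph is invented, and the real algorithm is far more elaborate and not public): a page’s rank is derived from the links other pages cast for it, so an offensive page can land at the top simply because many pages link to it, with no human opinion anywhere in the loop.

```python
# Toy PageRank-style ranking: a page's score derives from the scores of the
# pages linking to it. The link graph below is hypothetical; real search
# ranking uses many more signals than this.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:  # a page with no outlinks spreads its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# "offensive" attracts the most inbound links, so it ranks first; the
# ordering reflects aggregate linking behavior, not anyone's endorsement.
graph = {
    "offensive": [],
    "sympathetic1": ["offensive", "sympathetic2"],
    "sympathetic2": ["offensive"],
}
print(sorted(pagerank(graph).items(), key=lambda kv: -kv[1]))
```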

But the more serious problem is that it’s not clear that such labor-intensive re-ranking of websites would actually improve the results. The point of a search engine is to help people find things, and it’s far more important to include all the good stuff than it is to remove all the bad stuff. Judged in this light, it seems to me that the search in question does a pretty good job. In addition to the white supremacist site, it presents 8 highly relevant pages from sources sympathetic to Dr. King, as well as one critical article from the slightly nutty LewRockwell.com. If my goal is to find information about Dr. King, this strikes me as an entirely satisfactory list of search results. Sure, a search with 10 reputable sources would have been marginally better than one with 8, but all you have to do is click “Next” to see another page of results.

Frankly, I don’t think I’d want a Google staffer sanitizing my results for me. All human editors make mistakes, and I’m far more concerned about a Google staffer removing a link I might have wanted to see than I am about the occasional crackpot website showing up in search results.

To some extent, this is an example of one of the fallacies I discussed a couple of weeks ago in my post about critics of spontaneous order: focusing on failures in individual cases while ignoring the fact that the process as a whole dramatically outperforms centrally planned alternatives. Human editing might improve this one search result, but it couldn’t possibly scale to cover any significant fraction of the results the search engine returns.

But I think it also illustrates another fallacy about spontaneous order that ought to be added to my list: insisting on evaluating impersonal, amoral processes in personal, moral terms. It’s silly for Carr to be offended or outraged that the white supremacist site is the top result. The reason we’d be outraged by such a recommendation from a human editor is that it would be evidence of racist motives. But that’s clearly not the case in this search, so there’s no cause for outrage. Yet Carr insists on holding this impersonal algorithm to the same standards we would apply to a human being performing the same task.

As with my other fallacies, this one has parallels in other examples of spontaneous order. Most obviously, many people find it very difficult to avoid anthropomorphizing the process of evolution. We see the same tendency in criticisms of the free market: people are offended when the Backstreet Boys top the charts or The World is Flat tops the best-seller list, despite their obvious lack of artistic or literary merit. They anthropomorphize the market and then berate it for its poor taste. But no libertarian would claim that the market always delivers the best products; all we claim is that, on average, it will tend to serve consumer needs better than more centralized alternatives. That doesn’t imply that there will never be a bad outcome in individual cases.

The “wisdom of crowds” is an unfortunate phrase. Crowds often possess more information collectively than any of them possess individually, but they do not possess wisdom. Processes to aggregate their shared information can be extremely useful, but crowds are not big people, and it’s not appropriate to judge them as if they were. We might expect wisdom from people, but we ought to be satisfied with useful information from impersonal processes.
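To see concretely what useful information without wisdom looks like, here is a toy simulation in Python (the true value and noise level are invented for illustration, loosely echoing Galton’s famous ox-weighing contest): averaging many independent, noisy guesses cancels individual errors, so the aggregate lands closer to the truth than a typical guesser does. Nothing in the process resembles judgment; it is arithmetic.

```python
# Toy "wisdom of crowds" aggregation: many independent, noisy guesses of a
# true quantity. All parameters here are invented for illustration.
import random

random.seed(1)
TRUE_VALUE = 1198            # e.g., an ox's weight in pounds
guesses = [random.gauss(TRUE_VALUE, 75) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)
typical_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

# The averaged estimate lands far closer to the truth than a typical guess.
print(f"crowd error:   {crowd_error:.1f}")
print(f"typical error: {typical_error:.1f}")
```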
