Internet Filters Reconsidered

May 24, 2006

Inevitably, almost all battles about Internet content controls become battles about the effectiveness of Internet filters. That’s because, from the start, many have held out hope that private filters can offer families, schools, libraries and others the opportunity to block objectionable content without getting government involved in the ugly business of Net censorship.

But there have always been filtering critics and, ironically, they come from two very different camps. On one hand, we often hear policymakers or pro-regulation activist groups lamenting the fact that filters are UNDER-inclusive, or miss too much objectionable online content. Indeed, rumors are that the Department of Justice is currently engaged in a major effort to build a legal case against filters as an effective private blocking tool. If the government were able to make such a case successfully to the courts, it might undo a decade’s worth of jurisprudence built upon the belief that filters offer a “less-restrictive means” of addressing objectionable content compared to vague, over-broad government content control efforts.

Cutting in the opposite direction, many librarians, free expression groups and others have long criticized filters on the grounds that they are far too OVER-inclusive. These critics consider filters to be fundamentally flawed because they often block access to sites that contain important information. Early examples included filters that blocked access to breast cancer websites because they contained the word “breast,” or others that blocked access to Republican Majority Leader Dick Armey’s website because it contained the word “dick.” Moreover, the critics of filter over-inclusiveness also point out that, despite their technical nature, filtering technologies are ultimately quite stupid and depend on the subjective values / morality of their creators. For example, if a filter maker decides that websites discussing homosexual issues are offensive to him, then anyone using his software won’t be able to access those sites either.
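To see why this happens, here is a minimal, purely illustrative sketch of naive word-list filtering in Python. It is not any vendor’s actual algorithm, and the blocklist is hypothetical, but it shows how a bare keyword match sweeps up a breast cancer page or Dick Armey’s site right alongside genuinely objectionable material, while missing content that avoids the listed words:

    # Toy illustration of naive keyword filtering (not any real product's code).
    BLOCKED_WORDS = {"breast", "dick", "sex"}  # hypothetical blocklist

    def is_blocked(page_text: str) -> bool:
        """Block a page if any blocklisted word appears anywhere in its text."""
        words = (w.strip(".,!?'\"") for w in page_text.lower().split())
        return any(w in BLOCKED_WORDS for w in words)

    # Over-blocking: valuable pages trip the same rule as objectionable ones.
    print(is_blocked("Early detection of breast cancer saves lives."))  # True
    print(is_blocked("Rep. Dick Armey's official House website."))      # True
    # Under-blocking: objectionable content avoiding the listed words slips through.
    print(is_blocked("Explicit adult material described in other slang."))  # False

Real filters are more sophisticated than this, of course, but the same basic dependence on someone’s chosen word lists and categories is what produces both kinds of errors.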

These competing anti-filtering forces are still at war today. Not only is the DOJ trying to build a case against private filters, but new bills are being introduced in Congress and pro-regulatory critics are engaged in new efforts to question the effectiveness of filters. (They either want a government-approved filter or want online intermediaries to rid the Net of all content they find “indecent.”) Meanwhile, filters have again come under attack from the folks up at the Free Expression Policy Project (FEPP), which is part of the Brennan Center for Justice. They have just released a revised edition of their “Internet Filters: A Public Policy Report,” which mostly criticizes filters for their over-inclusiveness.

The report was authored by Marjorie Heins, Christina Cho, and Ariel Feldman. (Incidentally, Marjorie Heins is the author of one of the best histories of indecency regulation and government censorship efforts: “Not in Front of the Children: ‘Indecency,’ Censorship, and the Innocence of Youth.” Highly recommended reading for those of you interested in this issue.)

In the updated FEPP study, the authors argue that the over-blocking problem hasn’t improved despite the growth of the filtering market. Citing a litany of over-blocking examples from numerous independent studies and news reports, the authors conclude that “filters are still seriously flawed. They continue to deprive their users of many thousands of valuable Web pages, on subjects ranging from war and genocide to safer sex and public health…. There are much more effective ways to address concerns about offensive Internet content. Filters provide a false sense of security, while blocking large amounts of important information in an often irrational or biased way.” In sum, they argue that the proposed cure (filters) may be worse than the disease (potential exposure to objectionable content).

As a fervent defender of freedom of speech and expression, it goes without saying that I am sympathetic to some of the concerns raised by the FEPP report. I hate to think that citizens, including children, are potentially being denied access to valuable information. On the other hand, we DO have a problem here that needs to be addressed somehow: Underage access to potentially objectionable material.

Clearly, every family has a different set of values when it comes to what types of media content are acceptable in their home (or at schools and libraries). While it’s probably silly to generalize in this manner, let’s try to break families into at least 3 basic groups in terms of how they assimilate children into the media universe:

(1) The “shield their eyes and ears” approach: Some families will prefer to take a very restrictive approach to their children’s media exposure / education. They will want to block many different types of media / online content, perhaps using extremely restrictive methods and technologies. To this group, the over-inclusiveness issue associated with filters is probably a non-issue. They will be willing to make some sacrifices in terms of lost information / education to offset the possibility of unwanted exposure to what they regard as highly objectionable or morally repugnant content.

(2) The “talk to them about it” approach: Other families will take a more open, education-based approach to media exposure. They’ll let their kids see and hear a lot more than parents in Group 1. And when their kids run into troubling stuff, they’ll talk to them about it. But many of these folks will also take advantage of filtering tools, at least early in their children’s lives.

(3) “Hear no evil, see no evil” families: Still others might not even think about the issue of blocking access to content, online or offline. They might employ some basic ground rules or perhaps use some rudimentary screening techniques, but not give it much more thought than that. The kids will largely be given free rein to go where they want and do what they want in the media universe. (Incidentally, this was my family when I was growing up. Of course, we didn’t have the Internet in our homes back in the 1970s to deal with! It was just TV and VCRs.)

Let me again stress that I’m guilty of unfair generalizations here since most families are hybrids of these three models. Or, some families might adopt one of these approaches early in the life of their children (probably #1) and then switch to another (#2 or 3) later when their kids are teens.

Anyway, the reason I’m going through this exercise is simply to make the point that a significant percentage of the American citizenry would identify themselves as being firmly in the first (“shield their eyes and ears”) group. And even those who would self-identify with the second group (like my wife and me) would probably argue that, at least in the early years, they are employing something akin to a group 1 strategy.

The point is, a lot of people want to shield their kids from objectionable content. Some more than others and for longer than others. At a minimum, a large percentage of families will at least want to put a few “speed bumps” in the way to make sure that their kids aren’t cruising down the dark alleys of cyberspace on their own.

And so we get back to filters. They are certainly flawed. Highly imperfect, in fact. Over-inclusive at times; under-inclusive at other times. But, at the end of the day, they are the best private sector solution we’ve got at our disposal as parents today. Of course, you could adopt a group 3 strategy and just turn a blind eye to some of the more troubling aspects of the Internet. But I don’t think most parents want to do that. Do I really want to have a frank discussion with my very young daughter about some of the bizarre sexual perversions often on display in cyberspace? Uh, no. That’s why, despite their flaws, I’ll use filters until my kids reach an age when I feel comfortable gradually scaling back their use or eliminating them entirely from our computing devices.

And although Internet filters are “crude and error-prone” as the FEPP report suggests, I’m willing to live with it so long as:

(a) the government is not mandating that the filters be used or forcing their own filters upon us; and,

(b) there is a healthy market for competing filtering software / technology (for private use, of course).

Ironically, on this last point, the FEPP report almost seems to lament the fact that since they published their first report in 2001, the filter market has grown from about 19 major filters to over 133 today. Because all filters share certain over-blocking issues, FEPP seems to feel that the proliferation of more filters can only have negative consequences. But the fact is that not all filters share the exact same over-blocking problems. They don’t all block the exact same sites or topics or code words. So let a thousand flowers bloom, I say (or at least 133). Competition among filters might actually help us get to a better world with more specialized blocking software to suit all tastes and needs. (Indeed, this is already the case. The wonderful “GetNetWise.org” website allows parents to search for filters that meet their specific tastes / values. See: http://kids.getnetwise.org/tools/.)

Moreover, let’s not forget that, despite their many flaws, filters help us avoid the nasty alternative of government censorship. While the existence of all these filters will never stop policymakers or other critics from calling for more government regulation, they will at least provide us with an affirmative defense when cases go to court again and again. Private filters really do represent a “less restrictive means” of dealing with this issue and, despite over- and under-blocking flaws, will always be preferable to government regulation of online speech.

Thus, contrary to what FEPP argues, the cure is definitely NOT worse than the disease, because if filters did not exist or were not used, government would have a much easier time justifying a role for itself as content cop for the Internet.

Of course, if the DOJ somehow convinces the courts to buy into the argument that private filters are not effective enough, we could be looking at a world where government imposes its own pre-approved filters on us. [And, as the FEPP report points out, that’s what the government is already doing to libraries today as part of the Children’s Internet Protection Act (CIPA) of 2000. That measure demands that schools and libraries receiving federal funding use filters or else lose that funding.]

But think of how silly it would be for the government to spend all this time and money making a legal case against the efficacy of private filters and then propose their own public filters as a solution. Would one big government filter for all of us be better than 133 private filters that can at least partially be tailored to our specific needs? I think not. We know that any government-approved filters would be over-inclusive in the extreme, but they would probably also be under-inclusive and fail to capture much of the content (especially from overseas) that some find objectionable.

So, despite my continuing reservations about private filters, I remain convinced that the devil that you know is better than the one you don’t. I can deal with the shortcomings of private filters in terms of both over- and under-inclusiveness. A single government solution, by contrast, is just too frightening to think about.

Having said all this, I do hope that private filtering companies take a hard look at the FEPP report and search for ways to improve their technologies to avoid the sort of over-blocking that FEPP rightly bemoans. There’s clearly a lot of room for improvement on this front.
