Today I filed comments with the Federal Communications Commission (FCC) in its proceeding examining the marketplace for “advanced blocking technologies.” This proceeding was required under the “Child Safe Viewing Act of 2007,” which Congress passed last year and President Bush signed last December. The goal of the bill and the FCC’s proceeding (MB 09-26) is to study “advanced blocking technologies” that “may be appropriate across a wide variety of distribution platforms, including wired, wireless, and Internet platforms.” My colleagues will no doubt laugh about the fact that I have dropped an absurd 150 pages worth of comments on the FCC in this matter, but I had a lot to say on this topic! Parental controls, child safety, and free speech issues have been the focus of much of my research agenda over the past 10 years.
In my filing, I argue that the FCC should tread carefully in this matter since the agency has no authority over most of the media platforms and technologies described in the Commission’s recent Notice of Inquiry. Moreover, any related mandates or regulatory actions in this area could diminish future innovation in this field and would violate the First Amendment rights of media creators and consumers alike. The other major conclusions of my filing are as follows:
- There exists an unprecedented abundance of parental control tools to help parents decide what constitutes acceptable media content in their homes and in the lives of their children.
- There is a trade-off between complexity and convenience for both tools and ratings, and no parental control tool is completely foolproof.
- Most homes have no need for parental control technologies because parents rely on other methods or there are no children in the home.
- The role of household media rules and methods is underappreciated and those rules have an important bearing on this debate.
- Parental control technologies work best in combination with educational efforts and parental involvement.
- The search for technological silver bullets and “universal” solutions represents a quixotic, Holy Grail-like quest that would destroy innovation in this marketplace.
- Enforcement of “household standards” made possible through use of parental controls and other methods negates the need for “community standards”-based content regulation.
My entire filing can be found here and down below in a Scribd reader. All comments in the matter are due tomorrow and then reply comments are due on May 18th.
Continue reading →
Doug Feaver, a former Washington Post reporter and editor, has published a very interesting editorial today entitled “Listening to the Dot-Commenters.” In the piece, Feaver discusses his personal change of heart about “the anonymous, unmoderated, often appallingly inaccurate, sometimes profane, frequently off point and occasionally racist reader comments that washingtonpost.com allows to be published at the end of articles and blogs.” When he worked at the Post, he fought to keep anonymous and unmoderated comments off the WP.com site entirely because it was too difficult to pre-screen them all and “the bigger problem with The Post’s comment policy, many in the newsroom have told me, is that the comments are anonymous. Anonymity is what gives cover to racists, sexists and others to say inappropriate things without having to say who they are.”
But Feaver now believes those anonymous, unmoderated comments have value because:
I believe that it is useful to be reminded bluntly that the dark forces are out there and that it is too easy to forget that truth by imposing rules that obscure it. As Oscar Wilde wrote in a different context, “Man is least himself when he talks in his own person. Give him a mask, and he will tell you the truth.” Too many of us like to think that we have made great progress in human relations and that little remains to be done. Unmoderated comments provide an antidote to such ridiculous conclusions. It’s not like the rest of us don’t know those words and hear them occasionally, depending on where we choose to tread, but most of us don’t want to have to confront them.
It seems a bit depressing that the best argument in favor of allowing unmoderated, anonymous comments is that it allows us to see the dark underbelly of mankind, but the good news, Feaver points out, is that:
But I am heartened by the fact that such comments do not go unchallenged by readers. In fact, comment strings are often self-correcting and provide informative exchanges. If somebody says something ridiculous, somebody else will challenge it. And there is wit.
He goes on to provide some good examples. And he also notes how unmoderated comments let readers provide their heartfelt views on the substance of sensitive issues and let journalists and editorialists know how they feel about what is being reported or how it is being reported. “We journalists need to pay attention to what our readers say, even if we don’t like it,” he argues. “There are things to learn.”
Continue reading →
This week, a federal judge blocked a prosecutor from filing child pornography charges against three teenage girls in northeastern Pennsylvania over risque cell phone pictures they took of themselves. This respite from the bizarre “sexting” scandal allows time for a national dialogue on an issue that goes deeper than simple changes in technology.
“Sexting” is short for “sex texting,” or the practice of sending racy pictures via text message. Twenty percent of teens admit to distributing nude photos of themselves, according to a recent survey by the National Campaign to Prevent Teen and Unplanned Pregnancy — a statistic that probably disturbs parents but shouldn’t surprise anyone who remembers what being a teenager was like.
Teenage hormones are almost always raging, and many teens are reckless and looking for attention. Deploying child pornography laws to deal with this reality is like using a sledgehammer to kill a fly. If the girls are found guilty of these overblown charges, they would face not only the possibility of jail time, but also the requirement to register as sexual offenders for at least 10 years.
Clearly, such harsh punishment would be overkill, but the situation is indicative of the growing mentality that government must play the central role in fixing every problem society encounters.
Whether disciplining teens or restructuring failed automobile companies, government is more often than not becoming the “go-to” place for help. Those on both the political left and right have been involved in this slow move to relinquish individual responsibility in favor of government control, so there is plenty of blame to go around.
[…]
Read more here.
There’s a great article in Online Media Daily that sums up all the reasons why New Jersey should not pass proposed legislation that would hold social networking websites liable for abusive and harassing communications occurring on their sites.
A3757 was introduced this session and is part of a package of Internet safety legislation put forth by Attorney General Anne Milgram. The bill essentially strong-arms social networking sites into placing a conspicuous “report abuse” icon on their pages and into responding to and investigating alleged reports of harassment and bullying, on pain of liability for violating the state’s consumer fraud act.
There are lots of problems with this bill. First, how do you define what is and isn’t a social networking website? Social networking is not limited to just Facebook, MySpace, or LinkedIn. There are thousands of other sites that have social networking features but aren’t thought of as pure social network sites. Define “social networking” too narrowly, and you may not include these other sites where harassment and bullying can occur. Define it too broadly, however, and you create burdens and potential liability for many sites (particularly smaller ones) where there’s no real need for report-abuse icons and formal procedures.
The article cites Prof. Eric Goldman at Santa Clara Law School saying that Sect. 230 of the Communications Decency Act would preempt civil lawsuits against websites. But would it preempt state enforcement of the fraud act? I’m not sure.
The article also cites Sam Bayard, assistant director of the Citizen Media Law Project, who says Continue reading →
I’ve got a new essay up over at the City Journal about John Nichols and Robert McChesney’s proposal to have the government heavily subsidize failing media enterprises to “save journalism.” It follows below:
“Socializing Media in Order to Save It“
by Adam D. Thierer
City Journal March 27, 2009
With proposals to nationalize or heavily subsidize various segments of our economy more in vogue than ever, it was probably only a matter of time before someone suggested that America’s media marketplace should be brought into the government fold. John Nichols of The Nation and the prolific neo-Marxist media theorist Robert W. McChesney have now provided the road map for media’s march to serfdom. The cost to the American taxpayer would be at least $60 billion, but the cost for the First Amendment and our democracy would be incalculable.
Nichols and McChesney have coauthored several books and essays about media policy that view the world through the prism of class struggle, “manufactured consent” (à la Noam Chomsky), and the rest of the typical Marxoid tripe about history and economics. In their view, private, for-profit media cannot be trusted. As they stated in their 2003 call to arms, Our Media, Not Theirs: The Democratic Struggle Against Corporate Media, media-reform efforts must begin with “the need to promote an understanding of the urgency to assert public control over the media.” “Our claim,” they continue, “is simply that the media system produces vastly less of quality than it would if corporate and commercial pressures were lessened.”
In a new Nation essay, “The Death and Life of Great American Newspapers,” the authors bring their earlier work to its logical conclusion. Saving journalism, they argue, essentially requires that media become an appendage of the state. Journalism, they claim, is a “public good,” which—like education and defense—requires constant government oversight and support: “A moment has arrived at which we must recognize the need to invest tax dollars to create and maintain news gathering, reporting and writing with the purpose of informing all our citizens.” They propose that government devote $60 billion to “subscription subsidies, postal reforms, youth media and investment in public broadcasting.” Think of it as a “free press ‘infrastructure project,’” they say. “It would keep the press system alive. And it has the added benefit of providing an economic stimulus.” (Isn’t it amazing how everything stimulates the economy these days?)
Continue reading →
Today, it was my great privilege to guest lecture at Princeton University’s Center for Information Technology Policy. Under the leadership of Ed Felten, who also runs the excellent “Freedom to Tinker” blog, the CITP has quickly become one of America’s premier institutions in the field of IT policy matters. David Robinson, who some of you will remember from his days as an editor at The American, serves as associate director of the CITP program and was kind enough to invite me to speak. And our own Tim Lee is currently studying there as well. I wish I was smart enough to get into that program!
The topic of my talk was “The Future of the First Amendment in an Age of Technological Convergence” and I used the opportunity to create a narrated video of this presentation, which I have made to several other groups through the years. In this presentation, I talk about “America’s First Amendment Twilight Zone,” which refers to the fact that identical words and images are being regulated in completely different ways today depending on the mode of transmission. This illogical and unfair situation could eventually threaten the Internet, video games, and all new media with many of the misguided regulations that have long been imposed on broadcast television and radio operators. In my presentation, which you can watch below, I make the case for changing our First Amendment regime to ensure “bit equality”: all speech and media platforms should be accorded the gold standard of First Amendment protection.
http://www.youtube.com/v/xJo3tVMScyI&hl=en&fs=1
Continue reading →
Ars Technica has just posted the transcript of a friendly debate I recently engaged in with Harvard University law professor John Palfrey about the future of Section 230 of the Communications Decency Act and online liability more generally. Our debate got started last fall, shortly after I penned a favorable review of John’s excellent new book (with Urs Gasser), Born Digital: Understanding the First Generation of Digital Natives. [Listen to my podcast with John about it here.] Although I enjoyed John’s book, I also raised some concerns about his call in the book to reopen and revise Section 230, specifically to address child safety concerns. At the time, John and I were working together on the Berkman Center’s “Internet Safety Technical Task Force” and we decided to begin an e-mail exchange about the future of 230 and online liability norms more generally. The result was the debate that Ars has just published.
In our exchange, I begin by asking John to more fully develop some statements and proposals he sets forth in Born Digital. Specifically, he and co-author Urs Gasser argue that “The scope of the immunity the CDA provides for online service providers is too broad” and that the law “should not preclude parents from bringing a claim of negligence against [a social networking site] for failing to protect the safety of its users.” They also suggest that “There is no reason why a social network should be protected from liability related to the safety of young people simply because its business operates online.” Finally, they call for “strengthening private causes of action by clarifying that tort claims may be brought against online service providers when safety is at stake,” although they do not define those instances.
Using those proposals as a launching point for our discussion, I challenge John as follows:
I’m troubled by your proposals because I believe Section 230 has been crucial to the success of the Internet and the robust marketplace of online freedom of speech and expression. In many ways — whether intentional or not — Section 230 was the legal cornerstone that gave rise to many of the online freedoms we enjoy today. I fear that the proposal you have set forth could reverse that. It could lead to crushing liability for many online operators, and not just giants like MySpace or Facebook, that might not be able to absorb the litigation costs. Could you elaborate a bit more about your proposal and explain why you think the time has come to alter Section 230 and online liability norms?
And John does, and then we go back and forth from there. Again, you can read the whole exchange over at Ars.
It was a great pleasure to engage in this exchange with Prof. Palfrey and I look forward to what others have to say in response to our debate. I am working on a longer paper looking broadly at the rising threats to Sec. 230 and the increasing calls for expanded online liability and middleman deputization. I will use whatever feedback I get from this exchange to refine my paper and proposals.
CNN reports:
An Illinois sheriff filed a federal lawsuit Thursday against the owners of craigslist, accusing the popular national classified-ad Web site of knowingly promoting prostitution.
The sheriff is upset that the site maintains a bulletin board system that is very lightly policed by its creators. It is little more than a forum for people to place their own advertisements. Thus, principles of caveat emptor abound, as anyone who has tried to find an apartment through the service knows.
More importantly, craigslist is perhaps the best example of a site that should be immune from prosecution for the actions of its users under Section 230 of the Communications Decency Act. It exercises little control over what its users do, and that’s what makes the service both valuable and free. If the company had to hire thousands more people to examine every post that comes before it, its service would become more like Apple’s iPhone/iPod Touch App Store.
Section 230 gives websites like craigslist, Google, YouTube, Blogger, and pretty much every other user-driven Web 2.0 site the assurance that they can operate free of lawsuits over what someone else — their users — did. Adam Thierer goes so far as to argue that it makes possible a real-world analog of Nozick’s meta-utopia. Moreover, it is philosophically required by the tenet of justice known as the “principle of intervening action.”
Yet attorneys general and other politicians have been seizing on high-profile internet-related misfortunes like the MySpace suicide to push against Section 230’s safe harbor promise. Adam Thierer recently gave an excellent summary of where the section may be heading in the US. Other countries are even worse.
Perhaps even more dangerous than overt legal erosion of Section 230 through bad precedents (there are still some judicial defenders of the section out there, after all) is its covert destruction through coerced “agreements” forced upon ISPs and websites by AGs. They started popping up all over the place this summer and there is no end in sight. Indeed, CNN pointed out:
Craigslist entered into an agreement with 43 states’ attorneys general in November to enact measures that impose restrictions on its Erotic Services section. The agreement called for the Web site to implement a phone verification system for listings that required ad posters to provide a real telephone number that would be called before the ad went public.
Let’s hope the new administration stops the trend and puts life back into Section 230.
The Federal Communications Commission (FCC) has just released a Notice of Inquiry (NOI) in the matter of “Implementation of the Child Safe Viewing Act; Examination of Parental Control Technologies for Video or Audio Programming.” (MB Docket No. 09-26) This NOI was required by S. 602, the “Child Safe Viewing Act of 2007,” which Congress passed last October and President Bush signed into law on December 2nd. The measure requires the FCC to examine:
(1) the existence and availability of advanced blocking technologies that are compatible with various communications devices or platforms;
(2) methods of encouraging the development, deployment, and use of such technology by parents that do not affect the packaging or pricing of a content provider’s offering; and
(3) the existence, availability, and use of parental empowerment tools and initiatives already in the market.
The Act defines the term “advanced blocking technologies” as “technologies that can improve or enhance the ability of a parent to protect his or her child from any indecent or objectionable video or audio programming, as determined by such parent.” Importantly, the Act also directs the agency to look into blocking technologies that “may be appropriate across a wide variety of distribution platforms, including wired, wireless, and Internet platforms” and which “operate independently of ratings pre-assigned by the creator of such video or audio programming.” The Act requires that the FCC issue a report to Congress about these technologies no later than August 29, 2009.
When writing about the Child Safe Viewing Act shortly after its introduction in the summer of 2007, I noted that the measure potentially represented the beginning of “convergence-era content regulation” at the FCC. The two clauses quoted above are of particular importance in that regard. Congress has essentially invited the FCC to engage in unprecedented oversight of media platforms and ratings systems that the agency previously had very little ability to influence. Continue reading →
Ben Edelman of the Harvard Business School has just released an interesting new study in the Journal of Economic Perspectives entitled, “Red Light States: Who Buys Online Adult Entertainment?” Using data he obtained from a top-10 seller of adult entertainment, Edelman examined adult website subscriptions on the zip code level and found that conservatives seem to be every bit as interested in pornography as liberals. In fact, “Subscriptions [to adult entertainment sites] are slightly more prevalent in states that have enacted conservative legislation on sexuality” and “subscriptions are also more prevalent in states where surveys indicate conservative positions on religion, gender roles, and sexuality.” He also finds that:
In states where more people agree that “Even today miracles are performed by the power of God” and “I never doubt the existence of God,” there are more subscriptions to this service. Subscriptions are also more prevalent in states where more people agree that “I have old-fashioned values about family and marriage” and “AIDS might be God’s punishment for immoral sexual behavior.”
Even more interesting is the fact that, on a state-by-state basis, Utah* residents topped all other Americans in terms of subscriptions to online adult entertainment websites. Finally, Edelman concludes:
On the whole, these adult entertainment subscription patterns show a remarkable consistency: all but eleven states have between two and three subscribers to this service per thousand broadband households, and all but four have between 1.5 and 3.5. With interest in online adult entertainment relatively constant across regions, there’s little sign of a major divide.
But it’s not just Internet porn where we see this trend at work. As I noted in my law review article, “Why Regulate Broadcasting?” we’ve seen a similar trend at work with television. When you look at some of the TV shows that conservatives and religious groups gripe most about, you might be surprised to know that it is conservatives who make those shows as popular as they are!
Continue reading →
Anonymity, Reader Comments & Section 230
by Adam Thierer on April 9, 2009