
As mentioned here before, PFF has been rolling out a new series of essays examining proposals that would have the government play a greater role in sustaining struggling media enterprises, “saving journalism,” or promoting more “public interest” content. We’re releasing these as we get ready to submit a big filing in the FCC’s “Future of Media” proceeding (deadline is May 7th).  Here’s a podcast Berin Szoka and I did providing an overview of the series and what the FCC is up to.

In the first installment of the series, Berin Szoka and I critiqued an old idea that’s suddenly gained new currency: taxing media devices or distribution systems to fund media content. In the second installment, I took a hard look at proposals to impose fees on broadcast spectrum licenses and channel the proceeds to a “public square channel” or some other type of public media or “public interest” content. The third installment dealt with proposals to steer citizens toward “hard news” and get them to financially support it through the use of “news vouchers” or “public interest vouchers.”

In our latest essay, “The Wrong Way to Reinvent Media, Part 4: Expanding Postal Subsidies,” Berin and I argue that expanding postal subsidies is unlikely to do much to help failing media enterprises, will raise the risk of greater meddling by politicians with the press, and can’t be absorbed by the Postal Service without a significant increase in costs for ratepayers or taxpayers. The entire essay is attached below.

Continue reading →

Google has just launched a new tool that lets users view the total number of requests received “from government agencies around the world to remove content from our services, or provide information about users of our services and products.” As the FAQ explains, the tool maps the requests received over the last six months, excluding countries like China that prohibit the release of such numbers. For each country it shows the total number of data requests (criminal-related but not civil) when that total exceeds 30, and the total number of removal requests (not counting requests from private parties, like DMCA copyright takedown notices) when that total exceeds 10. Google makes a few important observations about the data, especially that Brazil’s and India’s numbers are skewed upward by the popularity there of Orkut, Google’s answer to Facebook.

This tool represents the beginning of a new era in transparency into how governments censor the Internet and violate users’ privacy. I very much look forward to seeing Google improve this tool to provide greater granularity of disclosure, and to seeing other companies improve upon what Google has started. Over time, this transparency could do wonders to advance Internet freedom for users by promoting positive competition among countries.

To illustrate the kinds of things one could do with this data with a more robust interface, I put together the following spreadsheet by scraping Google’s request numbers and mashing them up with total Internet user numbers I found here (those figures are mostly from late 2009):
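
By way of illustration, here is a minimal Python sketch of the kind of per-capita calculation such a mashup involves. The country names and figures below are placeholders rather than the actual scraped numbers, and the script itself is just a rough sketch of the arithmetic, not the spreadsheet.

```python
# Minimal sketch of the mashup: divide each country's request counts by its
# Internet-user total to get requests per million users. All numbers below
# are placeholders, not the figures from the actual spreadsheet.

requests = {
    # country: (data requests, removal requests) -- placeholder values
    "Countryland": (3500, 300),
    "Examplestan": (1200, 40),
    "Samplia": (250, 15),
}

internet_users = {
    # country: estimated Internet users (late-2009-style totals, placeholders)
    "Countryland": 70_000_000,
    "Examplestan": 230_000_000,
    "Samplia": 5_000_000,
}

rows = []
for country, (data_reqs, removal_reqs) in requests.items():
    users = internet_users[country]
    rows.append((
        country,
        data_reqs / users * 1_000_000,     # data requests per million users
        removal_reqs / users * 1_000_000,  # removal requests per million users
    ))

# Rank countries by data requests per million users, highest first.
for country, data_rate, removal_rate in sorted(rows, key=lambda r: r[1], reverse=True):
    print(f"{country}: {data_rate:.1f} data requests, "
          f"{removal_rate:.1f} removal requests per million users")
```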

Continue reading →

Google v. Everyone

March 23, 2010

I had a long interview this morning with the Christian Science Monitor. Like many of the interviews I’ve had this year, the subject was Google. At the increasingly congested intersection of technology and the law, Google seems to be involved in most of the accidents.

Just to name a few of the more recent pileups, consider the Google books deal, net neutrality and the National Broadband Plan, Viacom’s lawsuit against YouTube for copyright infringement, Google’s very public battle with the nation of China, today’s ruling from the European Court of Justice regarding trademarks, AdWords, and counterfeit goods, the convictions of Google executives in Italy over a user-posted video, and the reaction of privacy advocates to the less-than-immaculate conception of Buzz.

In some ways, it should come as no surprise to Google’s legal counsel that the company is involved in increasingly serious matters of regulation and litigation. After all, Google’s corporate goal is the collection, analysis, and distribution of as much of the world’s information as possible, or, as the company puts it, “to organize the world’s information and make it universally accessible and useful.” That’s a goal it has pursued with wild success in its brief history, whether you measure success by use (91 million searches a day) or market capitalization ($174 billion).

As the world’s economy moves from one based on physical goods to one driven by information flow, the mismatch between industrial law and information behavior has become acute, and Google finds itself a frequent proxy in the conflicts.

Continue reading →

Just a heads up that on my weekly tech policy podcast, Surprisingly Free Conversations, we’ve just posted an interview with Ethan Zuckerman of Harvard’s Berkman Center for Internet & Society. He recently published an excellent blog post on the limits to internet censorship circumvention technologies, and that’s the topic of our discussion. Ethan writes,

So here’s a provocation: We can’t circumvent our way around internet censorship.

I don’t mean that internet censorship circumvention systems don’t work. They do – our research tested several popular circumvention tools in censored nations and discovered that most can retrieve blocked content from behind the Chinese firewall or a similar system. (There are problems with privacy, data leakage, the rendering of certain types of content, and particularly with usability and performance, but the systems can circumvent censorship.) What I mean is this – we couldn’t afford to scale today’s existing circumvention tools to “liberate” all of China’s internet users even if they all wanted to be liberated.

You can listen to this episode here, and you can subscribe to the show on iTunes or RSS.

In interviews last week and this week (see KUOW’s “The Conversation”), I argued that the convictions of three Google executives by an Italian court for “illegal handling of personal data” threaten the future of all hosted content. More than that, I said that the convictions had a disturbing subtext: the ongoing effort of the Italian government to intimidate the remaining media outlets in that country that it doesn’t already control. (See “Larger Threat is Seen in Google Case” by the New York Times’ Rachel Donadio for the details.)

In Italy and other countries (think of the Twitter revolt following dubious elections in Iran), TCP/IP is quickly becoming the last bastion of a truly free press.   In that sense, the objectionable nature of the video in question made Google an easy target for a prosecutor who wanted to give the appearance of defending human dignity rather than threatening a free press.

In a post that was picked up on Saturday by TechMeme, I explained my position in detail:

The case involved a video uploaded to Google Videos (before the acquisition of YouTube) that showed the bullying of a person with disabilities.

Internet commentators were up in arms about the conviction, which can’t possibly be reconciled with European law or common sense. The convictions won’t survive appeals, and the government knows that as well as anyone. It neither wants to nor intends to win this case; if it did, that would mean the end of the Internet in Italy, if nothing else. Still, the case is worth worrying about, for reasons I’ll make clear in a moment.

But let’s consider the merits of the prosecution. Prosecutors bring criminal actions because they want to change behavior: the behavior of the defendant and, more important given the government’s limited resources, of others like him. What behavior did the government want to change here? Continue reading →

This morning at the Newseum in Washington, DC, U.S. Secretary of State Hillary Rodham Clinton delivered remarks on Internet freedom and the future of global free speech and expression. [Transcript is here + video.] It will go down as a historic speech in the field of Internet policy since she drew a bold line in the cyber-sand regarding exactly where the United States stands on global online freedom. Clinton’s answer was unequivocal: “Both the American people and nations that censor the Internet should understand that our government is committed to helping promote Internet freedom.” “The Internet can serve as a great equalizer,” she argued. “By providing people with access to knowledge and potential markets, networks can create opportunities where none exist.”

Unfortunately, however, “the same networks that help organize movements for freedom… can also be hijacked by governments to crush dissent and deny human rights.”  Echoing Winston Churchill’s famous “iron curtain” speech, Sec. Clinton argued that “With the spread of these restrictive practices, a new information curtain is descending across much of the world.”  She noted that virtual walls are replacing traditional walls in many nations as repressive regimes seek to squash the liberties of their citizenry.  That’s why the Administration’s bold stand in favor of online freedom is so essential.

Importantly, Sec. Clinton made it clear that the Obama Administration is ready to commit significant resources to this effort. She said that, over the next year, the State Department plans to work with others to establish a standing effort to promote technology and will invite technologists to help advance the cause through a new “innovation competition” that will promote circumvention technologies and other technologies of freedom. Sec. Clinton also challenged private companies to stand up to censorship globally and challenge foreign governments when they demand controls on the free flow of information or digital technology.

That is particularly important because Secretary Clinton’s speech comes on the heels of the recent news that Google and at least 30 other Internet companies were the victims of cyberattacks in China, which raises profound questions about the future of online freedom and cybersecurity. Sec. Clinton’s remarks will make it clear to online operators that the U.S. government stands prepared to back them up when they challenge the censorial policies of repressive foreign regimes.

Continue reading →

Over at Mashable, Ben Parr has a post (“Facebook Turns to the Crowd to Eradicate Offensive Content“) expressing surprise that Facebook has a crowdsourcing / community policing solution to deal with objectionable content:

Did you know that Facebook has a crack team of employees whose mission is to deal with offensive content and user complaints? Their ranks number in the hundreds. But while most websites have people on staff to deal with porn and violence, none of them have 350 million users to manage… Now the world’s largest social network found a way to deal with this shortage of manpower, though. Facebook has begun testing a new feature called the Facebook Community Council [currently invite-only]. According to a guest post on the Boing Boing blog by one of the council’s members, its goal is to purge Facebook of nudity, drugs, violence, and spam.

The Facebook Community Council is actually a Facebook app and tool for evaluating content for various offenses… The app’s tagging system allows council members to tag content with one of eight phrases: Spam, Acceptable, Not English, Skip, Nudity, Drugs, Attacking, and Violence. If enough council members tag a piece of content with the same tag, action is taken, often a takedown.
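
To make the mechanism concrete, here is a rough Python sketch of how such threshold-based tagging might work. The tag names come from the description above, but the threshold value, the function name, and the takedown action are illustrative assumptions, since the post doesn’t say what Facebook actually uses.

```python
from collections import Counter

# Tags from the Community Council description above. "Acceptable",
# "Not English", and "Skip" don't trigger takedowns in this sketch.
ACTIONABLE_TAGS = {"Spam", "Nudity", "Drugs", "Attacking", "Violence"}

# Illustrative guess at how many matching tags trigger action;
# the real number isn't public.
TAG_THRESHOLD = 5

def review(council_tags):
    """Return the moderation outcome once enough council members agree on a tag."""
    counts = Counter(tag for tag in council_tags if tag in ACTIONABLE_TAGS)
    for tag, count in counts.most_common():
        if count >= TAG_THRESHOLD:
            return f"takedown ({tag})"
    return "no action yet"

# Hypothetical usage: six members review a photo, five of them tagging it Nudity.
print(review(["Nudity", "Acceptable", "Nudity", "Nudity", "Nudity", "Nudity"]))
# -> takedown (Nudity)
```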

What Facebook is doing here is nothing all that new. Many other social networking sites and platforms, such as MySpace and Ning, do much the same, and so do video hosting sites like YouTube. [See my summary of YouTube’s efforts below.]

No doubt, some will be quick to decry as “private censorship” these moves by social networking sites, video hosting sites, and others to flag and remove objectionable content within their communities, but such critics need to understand that:
Continue reading →

While I was away at Oxford University last week, USA Today ran a story entitled “Online Hate Speech: Difficult to Police… and Define.” The author, Theresa Howard, was kind enough to call me for comment on the issue before I left, and I made two general points in response to her questions about how serious online hate speech is and how we should combat it:

(1) “The Internet is a cultural bazaar. It’s the place to find the best and worst of all human elements on display.” What I meant by that, quite obviously, is that you can’t expect to have the most open, accessible communications platform the world has ever known and not also have a handful of knuckleheads who use it to spew vile, hateful, ridiculous comments. But we need to put things in perspective: those jerks represent only a very, very small minority of all online speech and speakers. Hate speech is not the norm online. The overwhelming majority of online speech is of a socially acceptable, even beneficial, nature.

(2) “When advocacy groups work together and use the new technology at their disposal, they have a way of singling out bad speech and bad ideas.” What I meant by that was that the best way to combat the handful of neanderthals out there who spew hateful garbage is to: (a) use positive speech to drown out hateful speech, and (b) encourage websites to police themselves or use community policing techniques to highlight hateful speech and encourage the community to fight back. Importantly, this process is self-reinforcing. When online communities “flag and tag” objectionable or hateful content, it becomes easier for better site policing to occur, for social norms to develop, and for better speech to be targeted at that bad speech. Moreover, these new tools and methods are helping groups like the Anti-Defamation League and the National Hispanic Media Coalition to better identify hate speech and then channel their collective energy and efforts to unite the rest of the online community against those hateful speakers and sites.

I think this approach makes more sense than calling in governments to police online hate speech through censorship efforts. This is especially the case because, at the margins, “hate speech” can often be tricky to define and, at least in the United States, regulatory efforts could conflict with legitimate free speech rights. Again, the best way to deal with and marginalize such knuckleheads is with more and better speech.  Fight stupidity with sensibility, not censorship.

“Schools in Beijing are quietly removing the Green Dam filter, which was required for all school computers in July, due to complaints over problems with the software,” notes this Reuters report. Even though China backed down on its earlier requirement to have the Green Dam filter installed on all computers, according to Reuters “schools were still ordered by the Ministry of Industry and Information Technology to install the web filter, which Chinese officials said would block pornography and other unhealthy content.” The Reuters article mentions a notice carried on the home page of one Beijing high school that reads: “We will remove all Green Dam software from computers in the school as it has strong conflicts with teaching software we need for normal work.” The article also cites a school technology director, who confirmed that the software had been taken off most computers, as saying, “It has seriously influenced our normal work.”

Ironically, many educators and librarians in the United States can sympathize since they currently live under similar requirements.  Under the Children’s Internet Protection Act (CIPA) of 2000, publicly funded schools and libraries must implement a mandatory filtering scheme or run the risk of losing their funding. As the Federal Communications Commission summarizes:

[CIPA] imposes certain types of requirements on any school or library that receives funding for Internet access or internal connections from the E-rate program… Schools and libraries subject to CIPA may not receive the discounts offered by the E-rate program unless they certify that they have an Internet safety policy and technology protection measures in place. An Internet safety policy must include technology protection measures to block or filter Internet access to pictures that are: (a) obscene, (b) child pornography, or (c) harmful to minors (for computers that are accessed by minors).

Of course, nobody wants kids viewing porn in schools, but CIPA has been known to block far more than that and has become a real pain for many educators, librarians, and school administrators who occasionally have to get around these filters to teach their students about legitimate subjects. Anyway, I just find it ironic that some American lawmakers have been raising a beef about mandatory Internet filtering by the Chinese when we have our own mandatory filtering regime right here in the States. For example, Continue reading →

Peter Griffin v. FCC

August 14, 2009

Cartoons speak truth to power on censorship: