In interviews last week and this week (see KUOW’s “The Conversation”), I argued that the convictions of three Google executives by an Italian court for “illegal handling of personal data” threaten the future of all hosted content. More than that, I said that the convictions had a disturbing subtext: the ongoing effort of the Italian government to intimidate the remaining media outlets in that country that it doesn’t already control. (See “Larger Threat Is Seen in Google Case” by the New York Times’ Rachel Donadio for the details.)
In Italy and other countries (think of the Twitter revolt following dubious elections in Iran), TCP/IP is quickly becoming the last bastion of a truly free press. In that sense, the objectionable nature of the video in question made Google an easy target for a prosecutor who wanted to give the appearance of defending human dignity rather than threatening a free press.
In a post that was picked up on Saturday by TechMeme, I explained my position in detail:
The case involved a video uploaded to Google Video (before the acquisition of YouTube) that showed the bullying of a person with disabilities.
Internet commentators were up in arms about the conviction, which can’t possibly be reconciled with European law or common sense. The convictions won’t survive appeal, and the government knows that as well as anyone. It neither wants nor intends to win this case; if it did, it would mean the end of the Internet in Italy, if nothing else. Still, the case is worth worrying about, for reasons I’ll make clear in a moment.
But let’s consider the merits of the prosecution. Prosecutors bring criminal actions because they want to change behavior—the behavior of the defendant and, more important given the government’s limited resources, of others like him. What behavior did the government want to change here?
The video was posted by a third party. Within a few months, the Italian government notified Google of its belief that the video violated the privacy rights of the bullying victim, and Google took it down. The company also cooperated in helping the government identify who had posted the video, which in turn led to the bullies themselves.
The only thing the company did not do was screen the video before it went up. The Google executives convicted in absentia had no personal involvement with the video; they were prosecuted for what they did not do, and did not do personally.
So if the convictions stand, they establish a new rule for third-party content: to avoid criminal liability, company executives must personally ensure that no hosted content violates the rights of any third party.
In the future, the only way employees of Internet hosting services of all kinds could avoid criminal prosecution would be to pre-screen all user content before putting it on their websites. And pre-screen it for what? Any possible violation of any possible right. So not only would they have to review the content with an eye toward the laws of every possible jurisdiction, but they would also need to obtain releases from everyone involved, and to ensure those releases were legally binding. For starters.
It’s unlikely that such filtering could be done in an automated fashion. YouTube, for example, does filter user postings for copyright violations, but only because copyright holders supply reference files against which uploads can be compared. The only instruction this conviction communicates to service providers is “don’t violate any rights.” You can’t filter for that!
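To make that concrete, here is a minimal sketch, using hypothetical names and a toy chunk-hash fingerprint (real systems such as YouTube’s use perceptual fingerprints, not raw hashes), of why reference-based filtering works at all: the host can only flag uploads that match reference files it already holds. There is no reference set for “content that violates someone’s rights.”

```python
import hashlib

def fingerprint(data: bytes, chunk_size: int = 4096) -> set[str]:
    """Fingerprint content as a set of hashes over fixed-size chunks.
    (A toy stand-in for real perceptual/audio fingerprinting.)"""
    return {
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    }

def matches_reference(upload: bytes, references: list[set[str]],
                      threshold: float = 0.5) -> bool:
    """Flag an upload if enough of its chunks match a known reference."""
    fp = fingerprint(upload)
    if not fp:
        return False
    return any(len(fp & ref) / len(fp) >= threshold for ref in references)

# The host can screen for content it has a reference file for...
references = [fingerprint(b"copyrighted film reel" * 1000)]
print(matches_reference(b"copyrighted film reel" * 1000, references))  # True

# ...but a brand-new upload that violates someone's rights has nothing
# to match against, so no filter of this kind can catch it.
print(matches_reference(b"a video no one has ever seen", references))  # False
```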
The prosecutor’s position in this case is that criminal liability is strict—that is, that it attaches even to third parties who do nothing beyond hosting the content.
If that were the rule, there would of course be no Internet as we know it. No company could possibly afford to take that level of precaution, particularly not for a service that is largely or entirely free to users. The alternative is to risk prison for any and all employees of the company.
(The Google execs got sentences of six months in prison each, but they won’t serve them no matter how the case comes out. In Italy, sentences of less than three years are automatically suspended.)
And of course that isn’t the rule. Both the U.S. and the E.U. wisely grant immunity to services that simply host user content, whether it’s videos, photos, blogs, websites, ads, reviews, or comments. That immunity has been settled law in the U.S. since 1996 and in the E.U. since 2000. Without it, we simply wouldn’t have—for better or worse—YouTube, Flickr, MySpace, Twitter, Facebook, Craigslist, eBay, blogs, user reviews, comments on articles or other postings, feedback, etc.
(The immunity law, as I write in Law Five of “The Laws of Disruption,” is one of the best examples of the kind of regulating that encourages rather than interferes with emerging technologies and the new forms of interaction they enable.)
Once a hosting service becomes aware of a possible infringement of rights, most jurisdictions require, to preserve immunity, a reasonable investigation and, assuming the complaint has merit, removal of the offending content. That, for example, is the “notice and takedown” regime in the U.S. for content that violates copyright.
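As an illustration only, here is a minimal sketch of that sequence, with entirely hypothetical names: the host investigates first and removes content only when the complaint has merit, rather than deleting on sight.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Notice:
    content_id: str
    complainant: str
    claim: str

def handle_notice(notice: Notice, hosted: dict[str, str],
                  has_merit: Callable[[Notice], bool]) -> str:
    """Investigate a complaint; take down content only if it has merit."""
    if notice.content_id not in hosted:
        return "no-such-content"
    if has_merit(notice):               # the "reasonable investigation"
        del hosted[notice.content_id]   # removal preserves immunity
        return "removed"
    return "retained"                   # lawful content stays up

# Usage: a privacy complaint against a hosted video "v42".
hosted = {"v42": "user-uploaded video"}
notice = Notice("v42", "complainant@example.com", "violates my privacy")
print(handle_notice(notice, hosted, has_merit=lambda n: True))  # removed
```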
The government in this case knows the rule as well as anyone. This prosecution is entirely cynical—the government neither wants to nor intends to win on appeal. It was brought to give the appearance of doing something in response to the disturbing contents of the video (the actual perpetrators and the actual poster have already been dealt with). Google in this sense is an easy target, and a safe one in that the company will vigorously fight the convictions until the madness ends.
And, not unrelated, it underscores a message the Italian government has been sending any way it can to those forms of media it doesn’t already control—that it will use whatever means at its disposal, including the courts, to intimidate sources it can’t yet regulate.
So in the end it isn’t a case about liability on the Internet so much as a case about the power of new media to challenge governments that aren’t especially interested in free speech.
Internet pundits are right to be outraged and disturbed by the audacious behavior of the government. But they should be more concerned about what this case says about freedom of the press in Italy and less about what it says about the future of liability for content hosts.
And what it says about the Internet as a powerful, emerging form of communication that can’t easily be intimidated.
Not surprisingly, I received a lot of feedback on the post. In particular, several writers objected to my characterization of the convictions as sending any signal at all. The judge in the case has yet to publish his findings of fact or his analysis of the law, they argued, so it is premature to say whether the conviction has any implication beyond the need to respond quickly (how quickly?) to takedown requests.
It is true that, because the trial details have so far been kept private, it’s unclear what findings the judge made: how soon Google (the company, by the way, not the executives, who had nothing to do with the handling of the video) was informed of the objectionable nature of the video and by whom, how and when the company responded, and how many times the video was viewed before being removed.
But those details are irrelevant, as succinctly explained by EFF’s International Outreach Coordinator Danny O’Brien in a piece published on February 27th. Aside from a non sequitur about Net Neutrality at the end, O’Brien’s article makes a number of excellent points. In particular, he writes:
The court dismissed the allegation of criminal defamation but upheld a charge of illegally handling personal data on the basis that a video is personal data, and that under EU data protection law, Google needed prior authority before distributing that personal data.
This interpretation of the law means that Google is co-responsible for the legality of content containing the images of persons — before anyone has complained about the content. That effectively means to comply with the decision, any intermediary working within Italy must now pre-screen every piece of video with anyone who appears within it, or risk prosecution. As the judgment stands, it also presents such a wide definition of personal data that it might effectively require that all hosts pre-screen all content be it video, text, audio or data.
(emphasis added)
It is indeed hard to see how the contents of a video can satisfy the definition of personally identifiable information. Absent explicit identifying tags or other metadata associated with an uploaded video that names the individuals, the location of the incident being filmed, and so on, only those who already know the people in a video could identify them by watching it. If the contents of raw video are themselves the private information of everyone who appears in them, then, as O’Brien says, there’s clearly no practical way any video- or photo-sharing service can maintain its immunity under U.S. or E.U. law.
And that’s game over for the sensible rule that content hosts have any immunity at all. Even if social networks and other content-hosting services could somehow hire enough people to pre-screen everything before posting it, what exactly would those people be trained to screen for? Even the most innocuous content could violate privacy laws in Europe if the participants haven’t signed proper releases, and those are nowhere to be found in the videos themselves.
Several commentators have speculated that anything more than instant removal is unacceptable and should erase the presumption of immunity. (See, for example, John Naughton of The Observer: “And then there’s that awkward matter of the two months it took to take down the video.”)
Even if the length of time were relevant to this case, that too is a dangerous road to travel. One can think of lots of reasons why companies should be skeptical when asked to remove content by a third party—or even a government entity. Content that misrepresents, defames, or otherwise violates legal rights is actionable, of course, against the individual committing the violation. But sometimes the truth hurts, too, and services that host content need some breathing space to investigate complaints and ensure they are legitimate.
Under U.S. law, for example, content hosts must respond “expeditiously” to takedown notices regarding content that infringes a valid copyright or lose their immunity (Section 512(c) of the Digital Millennium Copyright Act); “expeditiously,” notably, is not defined. But as the EFF points out, there have been numerous cases of abuse of the takedown provision to suppress information that some third party doesn’t like, or to claim copyright violations that aren’t violations at all. Under Section 512(f), in fact, takedown requests that are not made in good faith can themselves lead to damages against the person making the request.
Which is all to say we want to encourage content hosts to investigate objections to content and not blindly and instantly remove it as soon as someone—anyone—complains.