IGF Day 2: The Coalition

October 23, 2013

As expected, today at 1pm there was a packed, off-the-books meeting facilitated by the “I-star” organizations (ICANN, ISOC, IETF, and a bunch of groups that don’t begin with I). The purpose of the meeting was to build support for a new Internet governance “coalition.” The argument is that because of the NSA’s global surveillance programs, the US is losing support for its perceived leadership on Internet governance. In order to avoid greater governmental or intergovernmental intrusion into the Internet, the technical community, as signaled in the Montevideo statement, must go on the offensive and create an alternative to such intrusion.

This argument is controversial, to say the least. To what extent does the “offensive” entail creating a top-down institution to deal with Internet policy issues? Neither the technical community nor civil society wants government to be in charge of the Internet, but the technical community (especially ICANN) seems much more comfortable with top-down non-governmental control. I worry that ICANN is going to become increasingly government-like. In any case, we are witnessing a small but historic rift between civil society and the technical community, which have always been on the same side in the war to keep governments off the Internet.

Even if ICANN’s argument makes a kind of sense, it may be reckless to pursue it in the proposed way. It’s now looking like there will be a don’t-call-it-a-summit in Rio in early May, hosted by the Brazilian government, to discuss these issues. Even if ICANN has good reason to believe that Brazil is negotiating in good faith, there is always the possibility that Brazil gets what it wants in the end. They are not likely to just roll over.

I’m open to the idea that we need an affirmative answer to the question of Internet policy institutions. But I’d feel a lot more comfortable if such institutions evolved bottom-up rather than emerging from a grand push, organized secretly by some members of the technical community, to create an alternative. Hopefully with the creation of the new coalition mailing list, everything can be done out in the open from here on out.

Day 1 of the Internet Governance Forum is in the books, and everyone is talking about what will happen on Day 2. Brazil recently announced that it will host a meeting on Internet governance in April. Tomorrow, ICANN is hosting a meeting at 1pm to explain how the April meeting will work.

Everyone that I’ve talked to in the hallways has brought up the meeting in April. No one is quite sure what to expect.

On one hand, Brazil has been part of the coalition that is pushing to do more Internet governance at the ITU. On the other hand, ICANN seems to be a willing participant in Brazil’s scheme. The recent “Montevideo Statement,” issued by various Internet organizations, called for globalizing the IANA function, which means at a minimum removing the US’s special role of maintaining the domain name system’s root zone file.

ICANN wants independence from the US government, and Brazil wants ICANN to be independent from the US government (and possibly dependent on the ITU), so this makes them allies for now.

Bizarrely, NSA surveillance continues to be cited as a reason for Brazil’s actions, although of course the IANA function has nothing to do with surveillance. The IANA issue is mostly about status. Other governments seem to feel slighted by the US’s control of the root zone file.

In any case, tomorrow we may know slightly more about ICANN and Brazil’s schemes.

Here’s the video from a recent panel I sat on at the 4th annual Privacy Identity Innovation conference (pii2013) in downtown Seattle on September 17, 2013. The panel was entitled, “Emerging Technologies and the Fine Line between Cool and Creepy,” a topic I have written much about here in recent blog posts as well as in law review articles. The panel was expertly moderated by the awesome Natalie Fonseca, co-founder and executive producer of the pii2013 event as well as the always excellent Tech Policy Summit. Other panelists included Terence Craig, Co-founder and CEO, PatternBuilders and Co-author, Privacy and Big Data, Jamela Debelak, Technology and Liberty Director, ACLU of Washington, and my friend Larry Downes, Consultant and Author of The Laws of Disruption, among other excellent books. We discussed how to balance the competing tensions surrounding new information technologies and stressed the various ways we could alleviate the primary concerns about many of them.

(The video, which is embedded down below, lasts just under 40 minutes. The audio is a little uneven because I was too stupid to keep the microphone close to my mouth. Sorry about that!)

Emerging Technologies and the Fine Line between Cool and Creepy from Privacy Identity Innovation on Vimeo.

Facebook announced some changes to its site today that will make it easier for teen users to share content with not just their friends but also the entire world. (More coverage at The Washington Post here.) No doubt, some privacy advocates will cry foul and rush to policymakers with requests for restrictions. Yet, it’s not clear to me what their case would be. There isn’t any COPPA issue here since we are talking about teens, and they aren’t covered by the law. Moreover, it seems entirely sensible to allow teens to make their voices heard more broadly via Facebook’s platform the same way they can via many other online sites and services. Teens have speech rights, too, after all.

On the other hand, this is another “teachable moment” that parents should take advantage of. When sites (especially larger sites like Facebook) change their policies and make it easier for our kids to share more about themselves and their feelings, that is always a great time to have another chat with them about acceptable online behavior. I’ve spent a lot of time here and elsewhere talking about the importance of “Netiquette,” or proper online etiquette in various social settings and situations. We need to talk to our kids and each other about being more savvy, sensible, respectful, and resilient media consumers and digital citizens. And schools and even governments have a role to play in pushing education and media literacy in pursuit of better “digital citizenship.”

The crucial lesson here — and this certainly has relevance to today’s Facebook announcement — is that we need to constantly be encouraging our kids to think about smarter online hygiene (sensible personal data use) and proper behavior toward others.

The launch of our new site PiracyData.org has predictably stirred up a good debate and I thought I’d chime in with a couple of thoughts. First I’d like to address the assertion by some, including Jeff Eisenach and Daniel Castro, that the point we’re trying to make with our site is that piracy is justified when content is not available legally. Here is Eisenach:

The Mercatus site is headlined by the following question: “Do people turn to piracy when the movies they want to watch are not available legally?” The implication is that piracy of movies that aren’t being offered legally is OK, or at least less bad, than piracy of movies that are currently available.

In both my post announcing the site and the Washington Post article Eisenach links to, as well as other articles about the site, I make it clear that there is no excuse for piracy. Piracy is illegal and wrong and copyright holders should be able to exercise their exclusive rights as they see fit during the term of copyright. I don’t know how much more explicit I can be. That said, although piracy is illegal and wrong, it may still be the case that the legal availability of content has an effect on piracy rates. That is a possibility that we are pointing out, not celebrating.

Second, I’d like to address the assertions by Eisenach and Castro that I am advocating that the movie industry should change its business model to collapse the theatrical release window, and that I think doing this will solve the piracy problem. Here again is Eisenach:

If you believe copyright holders have an obligation to make all content available to everyone all the time (as PiracyData.org seems to suggest), at what price would you require them to offer it?

In my post announcing the site I wrote that “their business model is their prerogative, and it’s none of my business to tell them how to operate,” and that’s something I repeated in other articles where I was quoted. So to be clear, I don’t think movie studios have any obligation to do anything. And I certainly don’t think that shortening their release windows would “eliminate piracy,” as Castro said.

Having addressed what I didn’t say, let me reiterate what I did say. The context for the creation of PiracyData.org was the MPAA report arguing that search engines were not taking sufficient voluntary measures to combat piracy. That study was released on the same day that the House held a hearing on “the role of voluntary agreements in the U.S. intellectual property system” at which it was also argued that search engines, and Google in particular, have not done enough to combat piracy. The message from the content industry was, to echo Eisenach, that Google has an obligation to take all possible steps to end piracy. It is the nature of this notional obligation that we wanted to probe with PiracyData.org.

Hopefully I’ve been sufficiently clear that we all think that piracy is illegal and wrong and a problem that the content industry is rightfully up in arms about. So the question that we’re really debating is not whether piracy is right or wrong, but how to enforce copyright. How many resources should be expended, and by whom? That’s what this debate is really about.

Over a year ago, Google changed its algorithm to demote sites in its search results based on the number of copyright complaints those sites have received. An algorithmic change is as deep a change in a search engine’s business as one can expect. The message coming from the MPAA report and the House hearing, however, was that Google’s efforts were not enough, and that it should take further voluntary steps not only to remove infringing links from its search results, but also to promote legal sources to the top.

PiracyData.org is never going to show all the ways that availability can affect piracy rates, and I’ve been clear about that both in my launch post and in interviews I’ve given, but I think that simply looking at the availability of the most-pirated movies will help shed some light on the simple question of whether people might turn to piracy when there is no legal version available. As the MPAA report noted, the majority of consumers who found themselves at infringing links “did not display an intention of viewing content illegally.” So the question is, why did these consumers who had no illegal intent end up at infringing sites? If they turned to piracy because they could not find a legal version, that would not justify piracy, but I hope we can all agree that it would be good to know whether it might be happening.

So it seems to me that we have identified two contending approaches to addressing piracy. One is to have search engines voluntarily take more and more steps to change how they present the Web to users in order to address piracy. The other is that movie studios could shrink the theatrical release window. These are not mutually exclusive, and I think we see both happening. Google, as I mentioned, has already changed its algorithm and taken other measures, and as the MPAA has pointed out, the movie studios are working hard to make their movies available when and where consumers want. The question, therefore, is whether these efforts are enough or not, and what is the best way to enforce copyright.

Without a massive investment in enforcement, the sad reality is that piracy rates will never be zero. So what we should debate is whether additional enforcement efforts are worth the cost. As much as we might not want it to be the case, at some point there are diminishing marginal returns to more enforcement. If we determine that more could be done, then the question is who should make that investment. Should it be search engines or the movie studios that consider further changing how they do business?

Now, let me say that I am absolutely sympathetic to content owners who are put in the incredibly unfair position of having to compete with piracy. As I said before, under the law they have the exclusive right to determine how they will distribute their works, and it is galling that they might feel forced by pirates to adopt a business model against their wishes. That is not fair to content owners. That said, the fact that they are facing this competition from piracy, as unfair and reprehensible as it is, is a fact that can’t be ignored.

In sum, these are the tough questions we should be discussing, not distractions about whether anyone is condoning piracy or whether anyone is blaming the victim, etc. We were hoping that PiracyData.org would spark that discussion, and boy have we had a big return on our investment! Now we need to make sure that we keep this debate on the serious and nuanced questions, and this is often hard to do over tweets and quotes in articles. So we are thinking of holding an event here at George Mason with all points of view represented to discuss these real questions. Stay tuned for details.

Today, we launched PiracyData.org, a site that takes the top ten most pirated movies of the week and mashes them up with data on legal online availability. Our hope is to build an extensive time-series dataset that can help shed light on the relationship between piracy and viewing options.

As might be expected with a new site, we’ve experienced some launch day glitches with the accuracy of our data and our visitors have thankfully pointed these out. We are of course committed to getting it right, so in the spirit of full transparency, we want to explain exactly what has gone wrong and how we plan on fixing it.

First, let me explain in detail how our site works and the exact data sources that we are using. Every hour, PiracyData.org polls the RSS feed for TorrentFreak’s most pirated movies posts. If the new week’s data is not yet in our database, we add it and fetch each movie’s availability from CanIStream.It.

CanIStream.It is a great site, but it is a little difficult for a computer to read. You can’t look up a movie by IMDB ID, which is pretty much the universal identifier for movies. What you can do, however, is pull up a CanIStream.It widget using an IMDB ID.

The widget separates availability into four categories: streaming, rental, purchase, and physical DVDs. Given that this is a discussion of online piracy, we are really only interested in the first three categories, but we preserve all four. We scrape the page for movie availability on all of the services that the widget lists.
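The scrape step described above could be sketched roughly as follows. To be clear, this is a hypothetical illustration, not the site’s actual code: the markup in SAMPLE_WIDGET is invented for the example, and the real CanIStream.It widget HTML differs, so the selectors here are assumptions.

```python
# Hypothetical sketch: reduce a widget page to the four availability
# categories the post describes. SAMPLE_WIDGET is invented markup.
import re

CATEGORIES = ("streaming", "rental", "purchase", "dvd")

def parse_availability(html):
    """Return {category: [service names]} for each of the four categories."""
    availability = {}
    for cat in CATEGORIES:
        # Assume each category is wrapped in <div class="category-NAME">...</div>
        m = re.search(r'<div class="category-%s">(.*?)</div>' % cat, html, re.S)
        # Assume each listed service carries a data-service attribute
        services = re.findall(r'data-service="([^"]+)"', m.group(1)) if m else []
        availability[cat] = services
    return availability

SAMPLE_WIDGET = '''
<div class="category-streaming"></div>
<div class="category-rental"><span data-service="amazon_rental"></span></div>
<div class="category-purchase"><span data-service="itunes"></span>
<span data-service="vudu"></span></div>
<div class="category-dvd"><span data-service="netflix_dvd"></span></div>
'''

print(parse_availability(SAMPLE_WIDGET))
```

An empty category div simply yields an empty list, which matches the “not available” case the site reports.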

Making our site this way has presented us with four distinct issues that we only discovered once we started getting user feedback on the site:

1. Movie availability may change throughout the week

This is actually not a problem with our data, but with how it’s interpreted. Because the TorrentFreak data is backward-looking, reporting the most pirated movies in the previous week, we only want to report the online availability of movies as it appeared on Monday. That is, we are intentionally taking a snapshot of Monday availability. If movies become available for rental on Tuesday, we will continue to report throughout the remainder of the week that they were not available to rent on Monday, because that is most likely to reflect the state of the world during the preceding week when the piracy was happening.
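One way to implement the Monday-snapshot rule just described is to key each week’s availability record by the Monday of the chart week, so that later changes during the week never overwrite the stored snapshot. A minimal sketch; the helper name is ours, not the site’s code:

```python
# Key each weekly record by the Monday of its week, so availability
# fetched on Monday is never refetched or overwritten later that week.
import datetime

def snapshot_key(d):
    """Return the Monday of the week containing date d."""
    return d - datetime.timedelta(days=d.weekday())

# A Wednesday maps back to the same Monday key as Monday itself,
# so a record stored on Monday stays fixed for the rest of the week.
print(snapshot_key(datetime.date(2013, 10, 23)))  # → 2013-10-21 (a Monday)
```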

A number of people have noted that Pacific Rim is now available for rental. We haven’t been able to confirm for sure, but we believe that it was added for rental at some point after we checked, and therefore this does not appear to be an error on our part. We’d appreciate it if anyone can confirm this because we want to make sure we are getting the right results.

2. Some services are available on CanIStream.It that are not listed in the widget, only on the main site

In particular, The Lone Ranger is available for rental only from a Sony service, but that service is absent from the CanIStream.It widget, not just for The Lone Ranger but for all movies. Earlier today, our site reported what the CanIStream.It widget reported: that the movie is not available for rental. However, when it was pointed out to us that CanIStream.It’s main site reports that The Lone Ranger is available on Sony, we updated our data to take account of that. We are going to find a way in the future to ensure that all services are automatically included in our dataset, but this means we may have to find another data source or resort to manual entry.

3. In at least one instance, CanIStream.It returned to us data for the wrong movie.

Here’s how the CanIStream.It widgets work: you go to the base URL “http://www.canistream.it/external/imdb/” and add the IMDB ID for the movie you are querying. For example, since Pacific Rim’s ID is tt1663662, you can see the widget for the movie at http://www.canistream.it/external/imdb/tt1663662.
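That URL construction is a one-liner. A minimal sketch using the base URL from the post; the "tt"-prefix check is our own addition, not something CanIStream.It requires:

```python
# Build the CanIStream.It widget URL for a movie, given its IMDB ID.
BASE = "http://www.canistream.it/external/imdb/"

def widget_url(imdb_id):
    """Return the widget URL for an IMDB ID like 'tt1663662'."""
    if not imdb_id.startswith("tt"):
        raise ValueError("expected an IMDB ID like 'tt1663662', got %r" % imdb_id)
    return BASE + imdb_id

print(widget_url("tt1663662"))  # Pacific Rim's widget page
```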

This works perfectly most of the time, but bizarrely, it doesn’t work for This Is the End, whose IMDB ID is tt1245492. When you visit http://www.canistream.it/external/imdb/tt1245492 you get the CanIStream.It widget for Jay and Seth Vs. the Apocalypse, not This Is the End. As an outlier, this caught us totally by surprise, and we updated the data on our site to reflect the correct data for This Is the End. Again, this is the kind of bug we could only have caught once we had lots of eyes on the site, and we’re grateful for the feedback.

4. The site is built using the best available data.

TorrentFreak and CanIStream.It offer extremely useful data to the public. While we’ve had some issues incorporating the CanIStream.It data, we are grateful for the data they provide. CanIStream.It’s data is seen as reliable even among industry insiders. For instance, the MPAA’s site wheretowatch.org directs its users to CanIStream.It as a source.

That said, if we want to build the canonical dataset on this issue, we have to do better. We need to make sure that there are no glitches. We would like to work with anyone with access to availability data to make sure that we can compile the most accurate data possible.

We’re not exactly sure what this entails yet. We may have to get availability data directly from the services themselves. If we can secure the cooperation of the services—for example if they would be willing to supply data on the date that each movie by IMDB number became available on their service—we could even compute availability data historically. TorrentFreak has data on pirated movies going back to 2006.

One thing is for certain: the dataset that we are proposing to build is important. We have provoked quite a reaction from people on both sides of this issue. We acknowledge that it has been a bumpy launch for our site, but we are committed to getting it right. We ask for everybody’s patience and good-faith assistance as we try to get there.

Today, Eli Dourado, Matt Sherman, and I launched PiracyData.org, a very simple site that tries to help answer the question: are the most-pirated movies each week available for legal streaming, digital rental, or digital purchase? We do this by mashing up TorrentFreak’s weekly top-ten list of the most pirated movies on BitTorrent with Can I Stream It’s database of movie availability. The result is a single-page website that visualizes the results, as well as a downloadable dataset that will grow each week.

The idea for the site came to me last month when RIAA president Cary Sherman was testifying before Congress at a hearing on what further voluntary steps search engines could take to combat piracy. That same day, the MPAA had released a study finding that users who ended up at URLs for infringing content had been “influenced” by search engines. This was reported in the press as “search engines lead to piracy.” The gist from the study and Sherman’s testimony was that search engines, and in particular Google, were not doing enough to address the fact that for some searches the top results include links to infringing content, and the implication, of course, is that if Google didn’t take voluntary action, perhaps Congress should require it to.

At the time I blogged an analysis of the MPAA study and noted that, according to the report, 58% of all visits to infringing URLs that were “influenced” by a search engine came from queries for either generic or title-based terms, not from the more-clearly suspicious “domain” terms. As the report remarked, this “indicat[es] that these consumers did not display an intention of viewing content illegally.” As I wrote at the time:

So the question is, why did these consumers who had no illegal intent end up at infringing sites? Could it be that they did not have a legal alternative to accessing the content they were seeking? That would not excuse their behavior, and it’s the movie industry’s prerogative whether and when to make their content available. Indeed release windows are part of its business model, although a business model seemingly in tension with consumer demand as evidenced by the shrinking theatrical release window. That all said, it’s not clear to me why search engines should be in the business of ensuring other industries’ business models remain unchanged.

After I wrote that, it occurred to me that we could begin to collect data to answer that question, and so I asked Eli and Matt if they wanted to help me build the site. The initial answer the site is generating seems to be that very few are available legally.

To be clear, we only have three weeks of data so far, and we’ll get a better picture in the months ahead as the dataset grows. Additionally, proving the adage that given enough eyeballs all bugs are shallow, we’ve been alerted to the fact that a couple of the movies we were listing as unavailable this week are in fact available. Looking into the problem, we found that although we were querying the correct IMDB ID for the movies, Can I Stream It was giving us back the wrong data. We’ve fixed the problem and updated the results. This is all to say that the site will prove its value a year from now when we have a substantial dataset.

That said, one implication of the early results may be that when movies are unavailable, illegal sources are the most relevant search results, so search engines like Google are just telling it like it is. That is their job, after all.

Also, while there is no way to draw causality between the fact that these movies are not available legally and that they are the most pirated, it does highlight that while the MPAA is asking Google to take voluntary action to change search results, it may well be within the movie studios’ power to change those results by taking voluntary action themselves. That is, they could make more movies available online and sooner, perhaps by collapsing the theatrical release window. Now, their business model is their prerogative, and it’s none of my business to tell them how to operate, but by the same token I don’t see how they can expect search engines and Congress to bend over backwards to protect the business model they choose.

As we continue to debate what are the responsibilities of different actors in the Internet ecosystem related to piracy, we hope PiracyData.org will provide useful context.

Oh man, I could not stop laughing at this old “Kids Guide to the Internet” video from the 90s. My thanks to my former colleague Amy Smorodin for tweeting it out today. I just had to post it here so that everyone could enjoy.

(Note: You can turn this video into a great drinking game. Just make everyone in the room raise their glass each time the lines “Does your computer have a modem?” and “Not all that cybernet stuff, OK?” are uttered.) And yes, as the opening line of the video notes, “the first thing you need to know about the Internet is that it is amazing.”

Michelle Quinn of Politico was kind enough to call me a few days ago and ask for comment for her story about “California Driving Internet Privacy Policy.” Quinn’s article offers an excellent overview of how the Golden State is gradually taking on a greater regulatory role for the Net, at least as it pertains to matters of online privacy. She opens by noting that:

With the federal government and technology policy shut down in Washington, California is steaming ahead with a series of online privacy laws that will have broad implications for Internet companies and consumers.

In recent weeks, Democratic Gov. Jerry Brown has signed a litany of privacy-related legislation, including measures to create an “eraser button” for teens, outlaw online “revenge porn” and make Internet companies explain how they respond to consumer Do Not Track requests. The burst of activity is another sign that the Golden State — home to Google, Facebook and many of the world’s largest tech companies — is setting the agenda for Internet regulation at a time when the White House and Congress are moving at a much more glacial pace.

When she asked me how I felt about this, I noted that: “California seems like it is willing to declare the Internet its own private fiefdom and rule it with its own privacy fist.” And, no matter how well intentioned any of these new California policies may be, the ends most certainly do not justify the means.

It could be argued that the exact match between the DISH bid commitment and the H block reserve price is purely coincidental. To actually believe this was a coincidence would require the same willing suspension of disbelief indulged by summer moviegoers who enjoy the physics-defying stunts enabled by computer-generated special effects. When moviegoers leave the theater after watching the latest Superman flick, they don’t actually believe they can fly home.

The FCC’s Wireless Bureau recently adopted an unusually high $1.564 billion reserve price for the auction of the H block spectrum. Though the FCC has authorized the Bureau to adopt reserve prices based on its consideration of “relevant factors that could reasonably have an impact on valuation of the spectrum being auctioned,” it appears the Bureau exceeded its delegated authority in this proceeding by considering factors unrelated to the value of the H block spectrum that have the effect of giving a particular firm an advantage in the auction. Specifically, the Bureau considered the value to DISH Network Corporation of amendments to FCC rules governing other spectrum bands already licensed to DISH (e.g., the 700 MHz E block) in exchange for DISH’s commitment to meet the $1.564 billion reserve price in the H block auction – a commitment that is contingent on the FCC Commissioners amending rules governing multiple spectrum bands no later than Friday, December 13, 2013.

No matter what the FCC Commissioners decide, if the reserve price stands, the only sure winner would be DISH. If the FCC Commissioners don’t endorse the DISH deal, DISH need not honor its commitment to meet the artificially inflated reserve price, which could result in the spectrum auction’s total failure. If the Commissioners do endorse the DISH deal, the artificially inflated reserve price could deter the participation of other bidders and lower auction revenues that are expected to fund the national public safety network. Neither option would result in an open and transparent auction designed to provide all potential bidders with a fair opportunity to participate.

The FCC would be the only sure loser. The appearance of impropriety in the H block proceeding could compromise public trust in the integrity of FCC spectrum auctions. To ensure the public trust is maintained, the FCC Commissioners should thoroughly review the processes and procedures implemented by the Wireless Bureau in this proceeding before auctioning the H block spectrum.

The following discussion provides background information on the purposes of spectrum auctions and reserve prices. This background information is followed by a more detailed analysis of the terms of the DISH deal and the advantages it would bestow on DISH, the lack of analysis in the Wireless Bureau’s order, the role of the Commissioners, and the potential damage to the integrity of FCC auctions.