Ryan is associate director of technology studies at the Competitive Enterprise Institute, where his work focuses on adapting law and policy to the unique challenges of the information age. His research areas include privacy, IP, telecommunications, competition policy, and media regulation.
Social widgets, such as the now-ubiquitous Facebook “Like” button and Twitter “Tweet” button, offer users a convenient way to share online content with their friends and followers. These widgets have recently come under scrutiny for their privacy implications. Yesterday, The Wall Street Journal reported that Facebook, Twitter, and Google are informed each time a user visits a webpage that contains one of the respective company’s widgets:
Internet users tap Facebook Inc.’s “Like” and Twitter Inc.’s “Tweet” buttons to share content with friends. But these tools also let their makers collect data about the websites people are visiting. These so-called social widgets, which appear atop stories on news sites or alongside products on retail sites, notify Facebook and Twitter that a person visited those sites even when users don’t click on the buttons, according to a study done for The Wall Street Journal.
It wasn’t exactly a secret that social widgets “phone home.” However, the Journal’s story shed new light on how the firms that offer social widgets handle the data they glean regarding user browsing habits. Facebook and Google reportedly store this data for a limited period of time — two weeks and 90 days, respectively — and, importantly, the data isn’t recorded in a way that can be tied back to a user (unless, of course, the user affirmatively decides to “like” a webpage). Twitter reportedly records browsing data as well, but deletes it “quickly.”
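The mechanics behind this "phoning home" are simple: when a browser renders a page that embeds a widget, it automatically fetches the button's script or iframe from the provider's servers, and that request carries the visited page's URL in the Referer header, along with any cookies the provider's domain has previously set. A minimal sketch of the headers involved (the provider domain and cookie value here are hypothetical, not taken from any actual widget):

```python
# Sketch of the request a browser makes when loading an embedded social
# widget. No click is required: merely rendering the page triggers it.

def widget_request(page_url, provider_cookies):
    """Build the headers a browser would send when fetching a widget."""
    headers = {
        "Host": "widgets.example-provider.com",  # hypothetical provider domain
        "Referer": page_url,  # discloses which page the user is visiting
    }
    if provider_cookies:
        # Cookies set earlier by the provider ride along automatically,
        # potentially linking the visit to a logged-in account.
        headers["Cookie"] = provider_cookies
    return headers

# Loading a news story that embeds a widget reveals the story's URL:
headers = widget_request("https://news.example.com/story-123", "session=abc123")
```

The key point is that the Referer header and cookies are attached by the browser itself, so the provider learns about the visit whether or not the user ever interacts with the button.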
Assuming the companies effectively anonymize the data they glean from their social widgets, privacy-conscious users have little reason to worry. I’m not aware of any evidence that social widget data has been misused or breached. However, as Pete Warden reminded us in an informative O’Reilly Radar essay posted earlier this week, anonymizing data is harder than it sounds, and supposedly “anonymous” data sets have been successfully de-anonymized on several occasions. (For more on the de-anonymization of data sets, see Arvind Narayanan and Vitaly Shmatikov’s 2008 research paper on the topic).
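To see why anonymization is harder than it sounds, consider a naive approach: replacing each logged IP address with its cryptographic hash. Because the space of possible inputs is small, the original values can be recovered by brute force. A toy Python illustration (this is not a description of how Facebook, Google, or Twitter actually handle widget data):

```python
import hashlib

def pseudonymize(ip):
    """Naive 'anonymization': replace an IP address with its SHA-256 hash."""
    return hashlib.sha256(ip.encode()).hexdigest()

def deanonymize(target_hash, prefix="192.168.1."):
    """Recover the original IP by trying every candidate.

    The input space is tiny (here, 256 hosts on one subnet; at most
    ~4 billion addresses for all of IPv4), so hashing alone provides
    no meaningful anonymity against an attacker with modest resources.
    """
    for host in range(256):
        candidate = f"{prefix}{host}"
        if pseudonymize(candidate) == target_hash:
            return candidate
    return None

h = pseudonymize("192.168.1.42")
print(deanonymize(h))  # recovers "192.168.1.42" from the "anonymous" hash
```

Robust anonymization requires more than obscuring identifiers; as the Narayanan and Shmatikov paper shows, even datasets stripped of identifiers can sometimes be re-identified by cross-referencing them with auxiliary data.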
Last November, I penned an essay on these pages about the COICA legislation that had recently been approved unanimously by the U.S. Senate Judiciary Committee. While I praised Congress’s efforts to tackle the problem of “rogue websites” — sites dedicated to trafficking in counterfeit goods and/or distributing copyright infringing content — I warned that the bill lacked crucial safeguards to protect free speech and due process, as several dozen law professors had also cautioned. Thus, I suggested several changes to the legislation that would have limited its scope to truly bad actors while reducing the probability of burdening protected expression through “false positives.” Thanks in part to the efforts of Sen. Ron Wyden (D-Ore.), COICA never made it to a floor vote last session.
Today, three U.S. Senators introduced a similar bill, entitled the PROTECT IP Act (bill text), which, like COICA, establishes new mechanisms for combating Internet sites that are “dedicated to infringing activities.” I’m glad to see that lawmakers adopted several of my suggestions, making the PROTECT IP Act a major improvement over its predecessor. While the new bill still contains some potentially serious problems, on net, it represents a more balanced approach to fighting online copyright and trademark infringement while recognizing fundamental civil liberties.
POLITICO reports that a bill aimed at combating so-called “rogue websites” will soon be introduced in the U.S. Senate by Sen. Patrick Leahy. The legislation, entitled the PROTECT IP Act, will substantially resemble COICA (PDF), a bill that was reported unanimously out of the Senate Judiciary Committee late last year but did not reach a floor vote. As more details about the new bill emerge, we’ll likely have much more to say about it here on TLF.
I discussed my concerns about and suggested changes to the COICA legislation here last November; the PROTECT IP Act reportedly contains several new provisions aimed at mitigating concerns about the statute’s breadth and procedural protections. However, as Mike Masnick points out on Techdirt, the new bill — unlike COICA — contains a private right of action, although that right may not permit rights holders to disable infringing domain names. Also unlike COICA, the PROTECT IP Act would apparently require search engines to cease linking to domain names that a court has deemed to be “dedicated to infringing activities.”
For a more in-depth look at this contentious and complex issue, check out the panel discussion that the Competitive Enterprise Institute and TechFreedom hosted last month. Our April 7 event explored the need for, and concerns about, legislative proposals to combat websites that facilitate and engage in unlawful counterfeiting and copyright infringement. The event was moderated by Juliana Gruenwald of National Journal. The panelists included me, Danny McPherson of VeriSign, Tom Sydnor of the Association for Competitive Technology, Dan Castro of the Information Technology & Innovation Foundation, David Sohn of the Center for Democracy & Technology, and Larry Downes of TechFreedom.
User-driven websites — also known as online intermediaries — frequently come under fire for disabling user content due to bogus or illegitimate takedown notices. Facebook is at the center of the latest controversy involving a bogus takedown notice. On Thursday morning, the social networking site disabled Ars Technica’s page after receiving a DMCA takedown notice alleging the page contained copyright infringing material. While details about the claim remain unclear, given that Facebook restored Ars’s page yesterday evening, it’s a safe bet that the takedown notice was without merit.
Understandably, Ars Technica wasn’t exactly pleased that its Facebook page — one of its top sources of incoming traffic — was shut down for seemingly no good reason. Ars was particularly disappointed by how Facebook handled the situation. In an article posted yesterday (and updated throughout the day), Ars co-founder Ken Fisher and senior editor Jacqui Cheng chronicled their struggle in getting Facebook to simply discuss the situation with them and allow Ars to respond to the takedown notice.
Facebook took hours to respond to Ars’s initial inquiry, and didn’t provide a copy of the takedown notice until the following day. Several other major tech websites, including ReadWriteWeb and TheNextWeb, also covered the issue, noting that Ars Technica is just the latest in a series of websites to have had a Facebook page wrongly disabled. In a follow-up article posted today, Ars elaborated on what happened and offered Facebook some tips on how it could have better handled the situation.
It’s totally fair to criticize how Facebook deals with content takedown requests. Ars is right that the company could certainly do a much better job of handling the process, and Facebook will hopefully re-evaluate its procedures in light of this widely publicized snafu. In calling out Facebook’s flawed approach to dealing with takedown requests, however, Ars Technica doesn’t do justice to the larger, more fundamental problem of bogus takedown notices.
Consumers are buying more and more stuff from online retailers located out-of-state, and state and local governments aren’t happy about it. States argue that this trend has shrunk their brick and mortar sales tax base, causing them to lose out on tax revenues. (While consumers in most states are required by law to annually remit sales taxes for goods and services purchased out of state, few comply with this practically unenforceable rule).
In his latest Forbes.com column, “The Internet Tax Man Cometh,” Adam Thierer argues against proposed legislation that would deputize online retailers as interstate sales tax collectors. He points out that while cutting spending should be the top priority of state governments, the dwindling brick and mortar tax base presents a legitimate public policy concern. Thierer, however, suggests an alternative approach:
The best fix might be for states to clarify tax sourcing rules and implement an “origin-based” tax system. Traditional sales taxes are already imposed at the point of sale, or origin. If you buy a book in a Seattle bookstore, the local sales tax rate applies, regardless of where you “consume” it. Why not tax Net sales the same way? Under an origin-based sourcing rule, all sales would be sourced to the principal place of business for the seller and taxed accordingly.
Origin-based taxation is a superb idea, as my CEI colleague Jessica Melugin explained earlier this month in the San Jose Mercury News in an op-ed critiquing California’s proposed affiliate nexus tax:
An origin-based tax regime, based on the vendor’s principal place of business instead of the buyer’s location, will address the problems of the current system and avoid the drawbacks of California’s plan. This keeps politicians accountable to those they tax. Low-tax states will likely enjoy job creation as businesses locate there. An origin-based regime will free all retailers from the accounting burden of reporting to multiple jurisdictions. Buyers will vote with their wallets, “choosing” the tax rate when making decisions about where to shop online and will benefit from downward pressure on sales taxes. Finally, brick-and-mortar retailers would have the “even playing field” they seek.
Congress should exercise its authority over interstate commerce and produce legislation to fundamentally reform sales taxes to an origin-based regime. In the meantime, California legislators should resist the temptation to tax those beyond their borders. Might we suggest an awards show tax?
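The practical difference between the two sourcing rules comes down to which jurisdiction's rate applies to a given sale. A toy sketch, using made-up rates and a hypothetical cross-state transaction:

```python
# Hypothetical sales tax rates, for illustration only.
RATES = {"WA": 0.095, "OR": 0.0, "CA": 0.0875}

def destination_tax(price, buyer_state):
    """Destination sourcing: the seller must determine the buyer's
    jurisdiction and apply that jurisdiction's rate."""
    return price * RATES[buyer_state]

def origin_tax(price, seller_state):
    """Origin sourcing: one rate, the seller's principal place of
    business, applies regardless of where the buyer lives."""
    return price * RATES[seller_state]

# A Portland (OR) retailer sells a $100 item to a Seattle (WA) buyer:
owed_destination = round(destination_tax(100.0, "WA"), 2)  # 9.5
owed_origin = round(origin_tax(100.0, "OR"), 2)            # 0.0
```

Under destination sourcing, a seller shipping nationwide would need to track thousands of state and local rates; under origin sourcing, it applies a single rate, and buyers can "choose" their tax burden by deciding where to shop.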
Like Milton, I’m very worried about the political vulnerabilities that might arise if the wireless sector grows more concentrated. Still, I think it’s a big mistake to legitimize one repressive incarnation of coercive state power (antitrust intervention) to reduce the likelihood that another incarnation (information control) will intensify. This approach is not only defeatist, as Hance argues, but it also requires a tactical assessment that rests on several dubious assumptions.
First, Milton overestimates the marginal risk that the AT&T – T-Mobile deal will pave the way for an information control regime. The wireless market isn’t static; the disappearance of T-Mobile as an independent entity (which may well occur regardless of whether this deal goes through) hardly means we’re forever “doomed” to live with 3 nationwide wireless players. With major spectrum auctions likely on the horizon, and the possibility of existing spectrum holdings being combined in creative ways, the eventual emergence of one or more nationwide wireless competitors is quite possible — especially if, as skeptics of the AT&T – T-Mobile deal often argue, the wireless market underperforms in the years following the acquisition.
More importantly, network operators, like almost all Internet gatekeepers, face mounting pressure from their users not to facilitate censorship, surveillance, and repression. Case in point: AT&T is a leading member of the Digital Due Process coalition (to which I also belong), which is urging Congress to substantially strengthen the 1986 federal statute that governs law enforcement access to private electronic communications. AT&T’s position on this major issue is at odds with the official position of the same Justice Department that’s currently reviewing the AT&T – T-Mobile deal. Would a docile, subservient network operator challenge its state overseers so publicly?
In the ongoing copyright debates, areas of common ground are seemingly few and far between. It’s easy to forget that not all approaches to combating copyright infringement are mired in controversy. One belief that unites many stakeholders across the spectrum is that more efforts are needed to educate Internet users about copyright. The Internet has spawned legions of amateur content creators, but not all of the content that’s being created is original. Indeed, a great deal of online copyright infringement owes to widespread ignorance of copyright law and its penalties.
For its part, Google yesterday unveiled “Copyright School” for YouTube users. As Justin Green explains on the official YouTube blog, users whose accounts have been suspended for allegedly uploading infringing content will be required to watch this video and then correctly answer questions about it before their account will be reinstated:
Of course, boiling down the basics of copyright into a four-and-a-half-minute video is no easy task, to put it mildly. (The authoritative treatment of copyright law, Nimmer on Copyright, fills an 11-volume treatise.) Copyright geeks and fans of “remix culture” will appreciate that Google’s video touches on fair use and includes links to in-depth resources for users to learn more about copyright. It will be interesting to see how Google’s effort influences the behavior of YouTube users and the incidence of repeat infringement.
The Competitive Enterprise Institute and TechFreedom are hosting a panel discussion this Thursday featuring intellectual property scholars and Internet governance experts. The event will explore the need for, and concerns about, recent legislative proposals to give law enforcement new tools to combat so-called “rogue websites” that facilitate and engage in unlawful counterfeiting and copyright infringement.
Video of the event will be posted here on TechLiberation.com.
“What Should Lawmakers Do About Rogue Websites?” — A CEI/TechFreedom event
Thursday, April 7 (12:00 – 2:00 p.m.)
The National Press Club (529 14th Street NW, Washington D.C.)
Juliana Gruenwald, National Journal (moderator)
Daniel Castro, Information Technology & Innovation Foundation
Larry Downes, TechFreedom
Danny McPherson, VeriSign
Ryan Radia, Competitive Enterprise Institute
David Sohn, Center for Democracy & Technology
Thomas Sydnor, Association for Competitive Technology
Space is very limited. To guarantee a seat, please register for the event by emailing email@example.com.
In the latest example of big government run amok, several politicians think they ought to be in charge of which applications you should be able to install on your smartphone.
On March 22, four U.S. Senators sent a letter to Apple, Google, and Research in Motion urging the companies to disable access to mobile device applications that enable users to locate DUI checkpoints in real time. Unsurprisingly, in their zeal to score political points, the Senators—Harry Reid, Chuck Schumer, Frank Lautenberg, and Tom Udall—got it dead wrong.
Had the Senators done some basic fact-checking before firing off their missive, they would have realized that the apps they targeted actually enhance the effectiveness of DUI checkpoints while reducing their intrusiveness. And had the Senators glanced at the Constitution – you know, that document they swore an oath to support and defend – they would have seen that sobriety checkpoint apps are almost certainly protected by the First Amendment.
While Apple has stayed mum on the issue so far, Research in Motion quickly yanked the apps in question. This is understandable; perhaps RIM doesn’t wish to incur the wrath of powerful politicians who are notorious for making a public spectacle of going after companies that have the temerity to stand up for what is right.
Today, the U.S. District Court for the Southern District of New York rejected a proposed class action settlement agreement between Google, the Authors Guild, and a coalition of publishers. Had it been approved, the settlement would have enabled Google to scan and sell millions of books, including out-of-print books, without getting explicit permission from the copyright owner. (Back in 2009, I submitted an amicus brief to the court regarding the privacy implications of the settlement agreement, although I didn’t take a position on its overall fairness.)
While the court recognized in its ruling (PDF) that the proposed settlement would “benefit many” by creating a “universal digital library,” it ultimately concluded that the settlement was not “fair, adequate, and reasonable.” The court further concluded that addressing the troubling absence of a market in orphan works is a “matter for Congress,” rather than the courts.
Both chambers of Congress are currently working hard to tackle patent reform and rogue websites. Whatever one thinks about the Google Books settlement, Judge Chin’s ruling today should serve as a wake-up call that orphan works legislation should also be a top priority for lawmakers.
Today, millions of expressive works cannot be enjoyed by the general public because their copyright owners cannot be found, as we’ve frequently pointed out on these pages (1, 2, 3, 4). This amounts to a massive black hole in copyright, severely undermining the public interest. Unfortunately, past efforts in Congress to meaningfully address this dilemma have failed.
In 2006, the U.S. Copyright Office recommended that Congress amend the Copyright Act by adding an exception for the use and reproduction of orphan works contingent on a “reasonably diligent search” for the copyright owner. The proposal also would have required that users of orphan works pay “reasonable compensation” to copyright owners if they emerge.
A similar solution to the orphan works dilemma was put forward by Jerry Brito and Bridget Dooling. They suggested in a 2006 law review article that Congress establish a new affirmative defense in copyright law that would permit a work to be reproduced without authorization if no rightsholder can be found following a reasonable, good-faith search.