Copyright

User-driven websites — also known as online intermediaries — frequently come under fire for disabling user content due to bogus or illegitimate takedown notices. Facebook is at the center of the latest controversy involving a bogus takedown notice. On Thursday morning, the social networking site disabled Ars Technica’s page after receiving a DMCA takedown notice alleging the page contained copyright-infringing material. While details about the claim remain unclear, given that Facebook restored Ars’s page yesterday evening, it’s a safe bet that the takedown notice was without merit.

Understandably, Ars Technica wasn’t exactly pleased that its Facebook page — one of its top sources of incoming traffic — was shut down for seemingly no good reason. Ars was particularly disappointed by how Facebook handled the situation. In an article posted yesterday (and updated throughout the day), Ars co-founder Ken Fisher and senior editor Jacqui Cheng chronicled their struggle in getting Facebook to simply discuss the situation with them and allow Ars to respond to the takedown notice.

Facebook took hours to respond to Ars’s initial inquiry and didn’t provide a copy of the takedown notice until the following day. Several other major tech websites, including ReadWriteWeb and TheNextWeb, also covered the issue, noting that Ars Technica is just the latest in a series of websites to have had their Facebook pages wrongly disabled. In a follow-up article posted today, Ars elaborated on what happened and offered some tips to Facebook on how it could have better handled the situation.

It’s totally fair to criticize how Facebook deals with content takedown requests. Ars is right that the company could certainly do a much better job of handling the process, and Facebook will hopefully re-evaluate its procedures in light of this widely publicized snafu. In calling out Facebook’s flawed approach to dealing with takedown requests, however, Ars Technica doesn’t do justice to the larger, more fundamental problem of bogus takedown notices.


When it comes to information control, everybody has a pet issue and everyone will be disappointed when law can’t resolve it. I was reminded of this truism while reading a provocative blog post yesterday by computer scientist Ben Adida entitled “(Your) Information Wants to be Free.” Adida’s essay touches upon an issue I have been writing about here a lot lately: the complexity of information control — especially in the context of individual privacy. [See my essays on “Privacy as an Information Control Regime: The Challenges Ahead,” “And so the IP & Porn Wars Give Way to the Privacy & Cybersecurity Wars,” and this recent FTC filing.]

In his essay, Adida observes that:

In 1984, Stewart Brand famously said that information wants to be free. John Perry Barlow reiterated it in the early 90s, and added “Information Replicates into the Cracks of Possibility.” When this idea was applied to online music sharing, it was cool in a “fight the man!” kind of way. Unfortunately, information replication doesn’t discriminate: your personal data, credit cards and medical problems alike, also want to be free. Keeping it secret is really, really hard.

Quite right. We’ve been debating the complexities of information control in the Internet policy arena for the last 20 years and I think we can all now safely conclude that information control is hugely challenging regardless of the sort of information in question. As I’ll note below, that doesn’t mean control is impossible, but the relative difficulty of slowing or stopping information flows of all varieties has increased exponentially in recent years.

But Adida’s more interesting point is the one about the selective morality at play in debates over information control. That is, people generally expect or favor information freedom in some arenas, but then get pretty upset when they can’t crack down on information flows elsewhere. Indeed, some people get downright religious about the whole “information-wants-to-be-free” thing in one breath and then, without missing a beat, turn around and talk like information totalitarians in the next.

In the ongoing copyright debates, areas of common ground are seemingly few and far between. It’s easy to forget that not all approaches to combating copyright infringement are mired in controversy. One belief that unites many stakeholders across the spectrum is that more efforts are needed to educate Internet users about copyright. The Internet has spawned legions of amateur content creators, but not all of the content that’s being created is original. Indeed, a great deal of online copyright infringement owes to widespread ignorance of copyright law and its penalties.

For its part, Google yesterday unveiled “Copyright School” for YouTube users. As Justin Green explains on the official YouTube blog, users whose accounts have been suspended for allegedly uploading infringing content will be required to watch a short video and then correctly answer questions about it before their accounts are reinstated.

Of course, boiling down the basics of copyright into a four-and-a-half-minute video is not an easy task, to put it mildly. (The authoritative treatment of copyright law, Nimmer on Copyright, fills an 11-volume treatise.) Copyright geeks and fans of “remix culture” will appreciate that Google’s video touches on fair use and includes links to in-depth resources for users to learn more about copyright. It will be interesting to see how Google’s effort influences the behavior of YouTube users and the incidence of repeat infringement.


Reputation oils the gears of many markets. People’s expressions of opinion about goods and services help establish the reputations of sellers and service providers. Knowing that they are the subject of reputation systems that they do not control, service providers do a better job on average than they otherwise would. Slacking off even once can sully a reputation and produce well-placed economic sanctions: people take their business elsewhere. Withdrawing reputation information from the public sphere will generally slow the process of winnowing bad actors out of a market and rewarding the good ones most highly. Commercial opinion is a little engine of positive externalities.

Federal privacy regulations under the Health Insurance Portability and Accountability Act have shaped the terms on which information is handled in health care services in ways people are right to disagree with. So it might be tempting to trade away one’s right to criticize a doctor in exchange for greater privacy protection. But a new site called DoctoredReviews.com argues against that bargain—indeed, it argues the bargain is illusory—and it criticizes the use of copyright law to enforce the deal.

Apparently, a group called Medical Justice is offering doctors a form contract to give to patients that holds out greater privacy protection for the patient if the patient will refrain from criticizing the doctor. That’s a deal people should be free to make, though—again—it’s probably a bad one. One way that the deal is enforced is by giving the doctor a copyright in the expressions of opinion that patients may issue. This gives the doctor a right to issue “take-down” notices to web sites where content critical of them is found.

This peculiar use of copyright takes the virtuous cycle in which a patient talking about an experience with a doctor benefits others, and doesn’t just nip it, bringing it back to zero; it also places enforcement costs on third parties. Enforcing copyrights in commentary pushes negative externalities onto website operators even as it deprives markets of useful information.

The DoctoredReviews site has a good, concise explanation of the law as it relates to website owners. I think copyright has some explaining to do—its distinction from rights in physical property is in high relief—if its enforcement can draw disinterested and uninvolved third parties into an administrative/litigation vortex.

The Competitive Enterprise Institute and TechFreedom are hosting a panel discussion this Thursday featuring intellectual property scholars and Internet governance experts. The event will explore the need for, and concerns about, recent legislative proposals to give law enforcement new tools to combat so-called “rogue websites” that facilitate and engage in unlawful counterfeiting and copyright infringement.

Video of the event will be posted here on TechLiberation.com.

What: “What Should Lawmakers Do About Rogue Websites?” — A CEI/TechFreedom event
When: Thursday, April 7 (12:00 – 2:00 p.m.)
Where: The National Press Club (529 14th Street NW, Washington D.C.)
Who:
  • Juliana Gruenwald, National Journal (moderator)
  • Daniel Castro, Information Technology & Innovation Foundation
  • Larry Downes, TechFreedom
  • Danny McPherson, VeriSign
  • Ryan Radia, Competitive Enterprise Institute
  • David Sohn, Center for Democracy & Technology
  • Thomas Sydnor, Association for Competitive Technology

Space is very limited. To guarantee a seat, please register for the event by emailing nciandella@cei.org.

Early in President Obama’s term it became clear that efforts to close the revolving door between industry and government weren’t serious or, at the very least, weren’t working. For a quick refresher on this, check out this ABC News story from August 2009, which shows how Mr. Obama exempted several officials from rules he claimed would “close the revolving door that lets lobbyists come into government freely” and use their power and position “to promote their own interests over the interests of the American people whom they serve.”

The latest example of this rapidly turning revolving door is covered expertly by Nate Anderson at Ars Technica:

Last week, Washington, DC federal judge Beryl Howell ruled on three mass file-sharing lawsuits. Judges in Texas, West Virginia, and Illinois had all ruled recently that such lawsuits were defective in various ways, but Howell gave her cases the green light; attorneys could use the federal courts to sue thousands of people at once and then issue mass subpoenas to Internet providers. Yes, issues of “joinder” and “jurisdiction” would no doubt arise later, but the initial mass unmasking of alleged file-swappers was legitimate.

Howell isn’t the only judge to believe this, but her important ruling is especially interesting because of Howell’s previous work: lobbying for the recording industry during the time period when the RIAA was engaged in its own campaign of mass lawsuits against individuals.

The emphasis above on Howell’s previous work is my own and is meant to underscore an overarching problem in government today, of which Judge Howell is just one example. In a government that is expected to regulate nearly every commercial activity imaginable, it should be no surprise that the very industries being regulated are a prime recruiting ground for experts on those subjects.

The English language is in the public domain (the language itself, not everything said with it). So it’s worthless, right? No dollars change hands when people use it. Perhaps it could be made worth something if someone were to own it. The owner could charge a license fee to people who use English, making substantial revenue on this suddenly valuable language.

According to the Tenth Circuit Court of Appeals, Congress can take works in the public domain and make intellectual property of them; the court approved Congress “restoring” public domain works to copyrighted status. (The case is Golan v. Holder, and the Supreme Court has granted certiorari.)

But would we really be better off if the English language were given a dollar value through the mechanism of ownership and licensing? No. What is now a costless positive-externality machine would turn into a profit center for one lucky owner. Society would not be better off, just that owner. If we had to pay for a language, we would regard that as a cost.

In a similar vein, Mike Masnick at TechDirt indulges the somewhat tongue-in-cheek observation that Microsoft costs the world economy $500 billion by accumulating to itself money that would have gone to other things. It’s a sort of Broken Window fallacy for intellectual property: the idea that creating ownership of intellectual goods creates value. What is not seen when intellectual property is withheld from the public domain is the unpaid uses that might have been made of it.

Now, Microsoft has reaped wonderful benefits from its intellectual creations because it has bestowed wonderful benefits on societies across the globe. But might it have provided all these benefits for slightly less reward, leaving more money with consumers for their preferred uses?

This is all a way of challenging the mental habit of assuming that dollars are equal to value. In the area of intellectual property (whether or not protected by federal statutes), things that have no effect on the economy (because they’re in the public domain) may have huge value. Things privately owned because of intellectual property law may have less value than they should, even though their owners collect lots of money.

Today, the U.S. District Court for the Southern District of New York rejected a proposed class action settlement agreement between Google, the Authors Guild, and a coalition of publishers. Had it been approved, the settlement would have enabled Google to scan and sell millions of books, including out of print books, without getting explicit permission from the copyright owner. (Back in 2009, I submitted an amicus brief to the court regarding the privacy implications of the settlement agreement, although I didn’t take a position on its overall fairness.)

While the court recognized in its ruling (PDF) that the proposed settlement would “benefit many” by creating a “universal digital library,” it ultimately concluded that the settlement was not “fair, adequate, and reasonable.” The court further concluded that addressing the troubling absence of a market in orphan works is a “matter for Congress,” rather than the courts.

Both chambers of Congress are currently working hard to tackle patent reform and rogue websites. Whatever one thinks about the Google Books settlement, Judge Chin’s ruling today should serve as a wake-up call that orphan works legislation should also be a top priority for lawmakers.

Today, millions of expressive works cannot be enjoyed by the general public because their copyright owners cannot be found, as we’ve frequently pointed out on these pages (1, 2, 3, 4). This amounts to a massive black hole in copyright, severely undermining the public interest. Unfortunately, past efforts in Congress to meaningfully address this dilemma have failed.

In 2006, the U.S. Copyright Office recommended that Congress amend the Copyright Act by adding an exception for the use and reproduction of orphan works contingent on a “reasonably diligent search” for the copyright owner. The proposal also would have required that users of orphan works pay “reasonable compensation” to copyright owners if they emerge.

A similar solution to the orphan works dilemma was put forward by Jerry Brito and Bridget Dooling. They suggested in a 2006 law review article that Congress establish a new affirmative defense in copyright law that would permit a work to be reproduced without authorization if no rightsholder can be found following a reasonable, good-faith search.

It seems peculiar to me that some of the same individuals and groups who so vociferously opposed a “broadcast flag” technological mandate in past years are now in a mad rush to have federal policymakers mandate a “Do Not Track” regulatory regime for privacy purposes. The broadcast flag debate, you will recall, centered on the wisdom of mandating a technological fix to the copyright arms race before digitized high-definition broadcast signals were effectively “Napster-ized.” At least that was the fear six or seven years ago. TV broadcasters and some content companies wanted the Federal Communications Commission (FCC) to recognize and enforce a string of code embedded in digital broadcast program signals that would have prevented mass redistribution of video programming.

Flash forward to the present debate about mandating a “Do Not Track” scheme to help protect privacy online. As I noted in my filing last week to the Federal Trade Commission, at root, Do Not Track is just another “information control regime.” Much like the broadcast flag proposal, it’s an attempt to use a technological quick fix to solve a complex problem. When it comes to such information control efforts, however, there aren’t many good examples of simple fixes or silver-bullet solutions that have worked, at least not for very long. The debates over Wikileaks, online porn, Internet hate speech, and spam all demonstrate how challenging it can be to put information back into the bottle once it is released into the digital wild.
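For readers unfamiliar with the mechanics, the Do Not Track signal itself is nothing more than an HTTP request header (“DNT: 1”) that a browser sends and that a website may choose to honor. The sketch below is my own illustration, not anything drawn from the FTC filing or the proposals under debate; the handler class and the cookie name are hypothetical. Under those assumptions, it shows roughly how a cooperating server might check for the header, which underscores that the hard part is not the technology but getting every party in the tracking chain to comply.

```python
# Minimal, hypothetical sketch: a web server that checks for the "DNT: 1"
# request header and, if present, skips setting an illustrative tracking cookie.
from http.server import BaseHTTPRequestHandler, HTTPServer


class DNTAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Browsers with the Do Not Track preference enabled send "DNT: 1".
        do_not_track = self.headers.get("DNT") == "1"

        if do_not_track:
            body = b"DNT requested: no tracking cookie set.\n"
        else:
            body = b"No DNT header: a tracking cookie would be set here.\n"

        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        if not do_not_track:
            # Hypothetical cookie name, purely for illustration.
            self.send_header("Set-Cookie", "visitor_id=example; Max-Age=31536000")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Serve on localhost:8080; stop with Ctrl-C.
    HTTPServer(("localhost", 8080), DNTAwareHandler).serve_forever()
```

Whether the thousands of ad networks and analytics services that receive such a header would actually honor it is, of course, precisely the policy question at issue.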

To be clear, I am not opposed to technological solutions like broadcast flag or Do Not Track, but I am opposed to forcing them upon the Internet and digital markets in a top-down, centrally-planned fashion. While I am skeptical that either scheme would work well in practice (whether voluntary or mandated), my concern in these debates is that forcing such solutions by law will have many unintended consequences, not the least of which will be the gradual growth of invasive cyberspace controls in these or other contexts. After all, if we can have “broadcast flags” and “Do Not Track” schemes, why not “flag” mandates for objectionable speech or “Do Not Porn” browser mandates?

Video is now available for all of the excellent programming at this year’s State of the Net 2011 conference. (Programming will also be available over time on C-SPAN’s video library.) The conference, organized by the Advisory Committee to the Congressional Internet Caucus, featured Members of Congress, leading academics, Administration, agency, and Congressional staff, and other provocateurs. Topics this year ranged from social networking, Wikileaks, COICA, copyright, privacy, security, and broadband policy to, of course, the end-of-the-year vote by the FCC to approve new rules for network management by broadband providers, aka net neutrality.