Today, the U.S. District Court for the Southern District of New York rejected a proposed class action settlement agreement between Google, the Authors Guild, and a coalition of publishers. Had it been approved, the settlement would have enabled Google to scan and sell millions of books, including out-of-print books, without getting explicit permission from the copyright owner. (Back in 2009, I submitted an amicus brief to the court regarding the privacy implications of the settlement agreement, although I didn’t take a position on its overall fairness.)

While the court recognized in its ruling (PDF) that the proposed settlement would “benefit many” by creating a “universal digital library,” it ultimately concluded that the settlement was not “fair, adequate, and reasonable.” The court further concluded that addressing the troubling absence of a market in orphan works is a “matter for Congress,” rather than the courts.

Both chambers of Congress are currently working hard to tackle patent reform and rogue websites. Whatever one thinks about the Google Books settlement, Judge Chin’s ruling today should serve as a wake-up call that orphan works legislation should also be a top priority for lawmakers.

Today, millions of expressive works cannot be enjoyed by the general public because their copyright owners cannot be found, as we’ve frequently pointed out on these pages (1, 2, 3, 4). This amounts to a massive black hole in copyright, severely undermining the public interest. Unfortunately, past efforts in Congress to meaningfully address this dilemma have failed.

In 2006, the U.S. Copyright Office recommended that Congress amend the Copyright Act by adding an exception for the use and reproduction of orphan works contingent on a “reasonably diligent search” for the copyright owner. The proposal also would have required that users of orphan works pay “reasonable compensation” to copyright owners if they emerge.

A similar solution to the orphan works dilemma was put forward by Jerry Brito and Bridget Dooling. They suggested in a 2006 law review article that Congress establish a new affirmative defense in copyright law that would permit a work to be reproduced without authorization if no rightsholder can be found following a reasonable, good-faith search.

Jane Yakowitz of Brooklyn Law School recently posted an interesting 63-page paper on SSRN entitled “Tragedy of the Data Commons.” For those following the current privacy debates, it is must reading, since it points out a simple truism: increased data privacy regulation could result in the diminution of many beneficial information flows.

Cutting against the grain of modern privacy scholarship, Yakowitz argues that “The stakes for data privacy have reached a new high water mark, but the consequences are not what they seem. We are at great risk not of privacy threats, but of information obstruction.” (p. 58) Her concern is that “if taken to the extreme, data privacy can also make discourse anemic and shallow by removing from it relevant and readily attainable facts.” (p. 63) In particular, she worries that “The bulk of privacy scholarship has had the deleterious effect of exacerbating public distrust in research data.”

Yakowitz is right to be concerned. Access to broad data sets that include anonymized profiles of individuals is profoundly important for countless sectors and professions: journalism, medicine, economics, law, criminology, political science, the environmental sciences, and many others. Yakowitz does a brilliant job documenting the many “fruits of the data commons” by showing how “the benefits flowing from the data commons are indirect but bountiful.” (p. 5) This isn’t about those sectors making money. It’s about how researchers in those fields use information to improve the world around us. In essence, more data = more knowledge. If we want to study and better understand the world around us, researchers need access to broad (and continuously refreshed) data sets. Overly restrictive privacy regulations or new forms of liability could slow that flow, weaken research capabilities and output, and leave society less well off because of the resulting ignorance. Continue reading →

On this week’s podcast, Patri Friedman, executive director and chairman of the board of The Seasteading Institute, discusses seasteading. He explains how and why his organization works to enable floating ocean cities that will allow people to test new ideas for government. He also talks about the advantages of starting new systems of government in lieu of trying to change existing ones, comparing seasteading to tech start-ups that are ideally positioned to challenge entrenched companies. Finally, Friedman suggests when such experimental communities might become viable and names a few inspirations behind his “vision of multiple floating Hong Kongs”: intentional communities, Burning Man, and Ephemerisle.

To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?

Five years ago this month, I penned a white paper on “Fact and Fiction in the Debate over Video Game Regulation” that I have been meaning to update ever since but never quite get around to. One of the myths I aimed to debunk in the paper was the belief that most video games contain intense depictions of violence or sexuality. This argument drives many of the crusades to regulate video games. In my old study, I aggregated several years’ worth of data about video game ratings and showed that the exact opposite was the case: the majority of games sold each year were rated “E” for Everyone or “E10+” (Everyone 10 and over) by the Entertainment Software Rating Board (ESRB).

Thanks to this new article by Ars Technica’s Ben Kuchera, we know that this trend continues. Kuchera reports that out of 1,638 games rated by the ESRB in 2010, only 5% were rated “M” for Mature. Among top sellers, the share of “M”-rated games is higher, coming in at 29%. But that’s hardly surprising, since a few big “M”-rated titles are always the power-sellers among young adults each year. Still, most of the best sellers don’t contain extreme violence or sexuality.

Continue reading →

Here are some quick thoughts on the proposed AT&T – T-Mobile merger, mostly borrowed from my previous writing on the wireless marketplace. First, however, I highly recommend this excellent analysis of the issue by Larry Downes, which cuts through the hysteria we’re already hearing and offers a sober look at the issues at stake. Anyway, a few random thoughts on the deal:

  • The deal will likely be approved: First, to cut to the chase. After much wrangling, the deal will probably be approved, primarily because of two factors, both of which help political officials as much as AT&T: (1) the deal delivers on the National Broadband Plan’s promise of blanketing the country with wireless broadband; and (2) it “brings home” T-Mobile by giving an American company control of a German-held interest. As Larry Dignan of ZDNet says, it is tantamount to “playing the patriotism card.”

  • One reason it might not be approved: Some Administration critics, especially from the more liberal part of the Democratic base, could make this a litmus test for the Obama administration’s antitrust enforcement efforts. In the wake of the Comcast merger approval — albeit one granted only after several pounds of flesh were handed over “voluntarily” — some of the Administration’s base will be looking for blood. I remember how the Powell FCC was under real heat to “get tough” on mergers back in 2001-02, and during that time it blocked the proposed DirecTV-EchoStar deal, possibly as a result of that pressure. The same thing could happen to AT&T – T-Mobile here.

  • It’s all about spectrum: From AT&T’s perspective, this deal is all about getting more high-quality spectrum, which is in increasingly short supply. Indeed, as Jerry Brito noted earlier, this merger should serve as another wake-up call regarding the need to get spectrum reform going again to ensure that existing players can reallocate their spectrum to those who demand it most. (Hint: Incentivize the TV broadcasters to sell... NOW!) But in the short term, this deal helps AT&T build out a more robust nationwide wireless network. Over the long haul, that should help T-Mobile deliver better service to its customers. Continue reading →

Many folks will no doubt be writing a lot about the competitive issues surrounding the announced AT&T/T-Mobile merger, so instead I thought I’d weigh in on what I know best: spectrum.

To the extent you’re worried about the concentration of the wireless market, you should really be concerned about the government policies that make entry and expansion so difficult.

First, if a carrier wants to acquire more spectrum to meet consumer demand for new services, it can’t, thanks to the artificial scarcity created by federal policies that dedicate vast swaths of the most valuable spectrum to broadcast television and to likely inefficient government uses. It’s gratifying to see the FCC now confronting the “spectrum crunch,” but waiting for a deal to be brokered on incentive auctions is a luxury carriers don’t have. So buying a competitor might be the only way left for them to acquire more spectrum.

Second, if a carrier wants to put up a new tower, or add antennas to existing towers, it has to get permission from the local zoning board. This can be extremely onerous, as different localities have different reasons to hold up approval. Buying a competitor is therefore also an obvious way to get access to more towers.

Again, I’m not sure this merger will have a negative effect on competition. Many industries with high sunk costs are perfectly competitive with just two or three players. (I look forward to a good analysis of that question, perhaps from our own Geoff Manne or Josh Wright.) What I do know is that if you are worried about competition, antitrust policy is not going to solve the long-term issue of artificial scarcity, which is the real problem here.

Entry is possible. In fact, a new entrant in the wireless market is waiting in the wings in the form of the cable industry, with the spectrum it acquired in the AWS auction. Before the cable companies can start offering services, however, they must relocate the incumbent users of the bands they acquired. There is also Clearwire, part-owned by Comcast, Time Warner, and Google, all serious competitors to the Bells.

If we really got serious about reallocating broadcast and inefficiently used federal spectrum, we might not have to worry about competition. We’d likely see new entry, and access to spectrum would be less of a reason to acquire a competitor.

In the rush of ink that flowed yesterday over AT&T’s announced merger with T-Mobile USA, I posted a long piece on CNET calling for calm, reasoned analysis of the deal by regulators, chiefly the Department of Justice and the FCC.

Since the details of the deal have yet to be fleshed out, it’s hard to say much about the specifics of how customers will be affected in the short or long term. My CNET colleague Maggie Reardon, however, does an excellent job laying out both the technical and likely regulatory issues in a piece posted today from the CTIA conference. Continue reading →

I guess the search for market failure in the privacy area is interesting to me. I wrote about it the other week, too. It’s nice that those who prefer regulation feel obligated to justify that preference. It’s an acknowledgment of the fact, increasingly well-accepted worldwide, that functioning free markets do a better job of discovering and satisfying consumers’ interests than any other method for organizing society’s resources.

A recent blog post called “Privacy and the Market for Lemons, or How Websites Are Like Used Cars” seems to have piqued Adam’s interest. (See the comments.) In it, privacy and anonymity researcher Arvind Narayanan makes the case for privacy market failure. (Evidently, it’s an argument that others have made before.)

“In the realm of online privacy and data collection,” he says, “information asymmetry results from a serious lack of transparency around privacy policies. The website or service provider knows what happens to data that’s collected, but the user generally doesn’t.” Several economic, architectural, cognitive and regulatory limitations/flaws “have led to a well-documented market failure—there’s an arms race to use all means possible to entice users to give up more information, as well as to collect it passively through ever-more intrusive means.”

Alas, there’s no link at “well-documented.” I would like to see that documentation. But more importantly, what Narayanan appears to be speaking of as market failure—an arms race to get more information from Web users—is not one. That’s market action that Narayanan doesn’t like.

So where’s the market failure? Continue reading →

Today, the U.S. Senate Commerce Committee held a hearing on “The State of Online Consumer Privacy.”

The push for online privacy regulation has real momentum, as proposed privacy legislation from numerous lawmakers, a Federal Trade Commission report proposing a compulsory Do Not Track mechanism to regulate business marketing practices, and the Obama Administration’s proposed “Privacy Bill of Rights” all indicate.

However, Congress should be very wary of such proposals. A politically defined Do Not Track regime risks undermining targeted advertising, impeding business transactions that occur between strangers, and stifling mobile ecosystems that are barely out of the cradle. Rattling consumers needlessly by encouraging them to opt out of largely beneficial information collection is an especially unwise idea in our uncertain economic climate – particularly when major industry participants are developing such mechanisms on their own.

The opportunity to undermine online marketing – wrongly called “surveillance” – appeals to some, but such privacy purists have no right to call the shots for anyone but themselves and those who agree with them. The right to use information acquired through voluntary transactions is no less important than the right to decide whether to disclose information in the first place.

Continue reading →

My thanks to Linton Weeks of NPR, who reached out to me for comment on a story he was doing about the impact of the Internet and digital technology on culture and our attention spans. His essay, “We Are Just Not Digging The Whole Anymore,” is an interesting exploration of the issue, although it is clear that Weeks, like Nick Carr (among others), is concerned about what the Net is doing to our brains. He says:

We just don’t do whole things anymore. We don’t read complete books — just excerpts. We don’t listen to whole CDs — just samplings. We don’t sit through whole baseball games — just a few innings. Don’t even write whole sentences. Or read whole stories like this one. We care more about the parts and less about the entire. We are into snippets and smidgens and clips and tweets. We are not only a fragmented society, but a fragment society. And the result: What we gain is the knowledge — or the illusion of knowledge — of many new, different and variegated aspects of life. What we lose is still being understood.

After reading the entire piece I realized that some of my comments to Weeks probably came off as a bit more pessimistic about things than I actually am. I told him, for example, that “Long-form reading, listening and viewing habits are giving way to browse-and-choose consumption,” and that “With the increase in the number of media options — or distractions, depending on how you look at them — something has to give, and that something is our attention span.”

Luckily, however, Weeks was kind enough to also give me the last word in the story in which I pointed out that it would be a serious mistake to conclude “that we’re all growing stupid, or losing our ability to think, or losing our appreciation of books, albums or other types of long-form content.” Instead, I argued: Continue reading →