“All this top-40 music sounds the same.” I think we’ve all heard this sentiment. The nature of regional radio broadcasting almost requires a regression to the mean in musical tastes. A radio station cannot be all things to all people. I suspect most people will be surprised to learn that some of the most innovative radio broadcasts are taking place at hundreds of stations across the country—and only a few people can listen to them. These stations, known as low-power FM (LPFM) stations, carry niche programming like independent folk rock music, fishing shows, political news, reggae, blues, and religious programming. (And one station in Sitka, Alaska, consists entirely of a live feed of whale sounds.)
Yesterday it was my privilege to speak at a Free State Foundation (FSF) event on “Ideas for Communications Law and Policy Reform in 2013.” It was moderated by my friend and former colleague Randy May, who is president of FSF, and the event featured opening remarks from the always-excellent FCC Commissioner Robert McDowell.
During the panel discussion that followed, I offered my thoughts about the problem America continues to face in cleaning up communications and media law and proposed a few ideas to get reform done right once and for all. I don’t have time to formally write up my remarks, but I thought I would just post the speech notes that I used yesterday and include links to the relevant supporting materials. (I’ve been using a canned version of this same speech at countless events over the past 15 years. Hopefully lawmakers will take up some of these reforms sometime soon so I’m not still using this same set of remarks in 2027!)
This Wednesday the Information Economy Project at George Mason University will present the latest installment of its Tullock Lecture series, featuring Thomas G. Krattenmaker, former director of research at the FCC. Here is the notice:
Thomas G. Krattenmaker
Former Director of Research, FCC
Former Professor of Law, Georgetown University Law Center
Former Dean and Professor, William and Mary Law School
Wednesday, October 17, 2012
The Information Economy Project at George Mason University
proudly presents The Tullock Lecture on Big Ideas About Information
4:00 – 5:30 pm @ Hazel Hall Room 215
GMU School of Law, 3301 Fairfax Drive, Arlington, Va.
(Orange Line: Virginia Square-GMU Metro)
Reception to Follow in the Levy Atrium, 5:30-6:30 pm
In its June 21, 2012 opinion in FCC v. Fox, the Supreme Court vacated reasoned judgments of the Second Circuit, without one sentence questioning the validity or wisdom of those judgments. Although the Court absolved Fox on a technicality, its opinion appears to reflect a post-modern approach to First Amendment jurisprudence concerning broadcast speech, whereby neither precedent nor principle controls outcomes. This indulgent approach to a government censorship bureau appears to acquiesce in an unconfined, unprincipled, and unwarranted seizure of regulatory power by the FCC. The Fox opinion thus compounds and enables a grave regulatory failure; whether any sound broadcast indecency policy or legal regime is feasible is perhaps debatable, but the Federal Communications Commission is wholly incapable of administering such a regime. The lecture will be preceded by a short introduction by Fernando Laguarda.
Vinton Cerf, one of the “fathers of the internet,” discusses what he sees as one of the greatest threats to the internet—the encroachment of the United Nations’ International Telecommunication Union (ITU) into the internet realm. ITU member states will meet this December in Dubai to update international telecommunications regulations and consider proposals to regulate the net. Cerf argues that, as the face of telecommunications is changing, the ITU is attempting to justify its continued existence by expanding its mandate to include the internet. Cerf says that the business model of the internet is fundamentally different from that of traditional telecommunications, and as a result, the ITU’s regulatory model will not work. In place of top-down ITU regulation, Cerf suggests that open multi-stakeholder processes and bilateral agreements may be a better solution to the challenges of governance on the internet.
There are a lot of inaccurate claims – and bad economics – swirling around the Universal Music Group (UMG)/EMI merger, currently under review by the US Federal Trade Commission and the European Commission (and approved by regulators in several other jurisdictions including, most recently, Australia). Regulators and industry watchers should be skeptical of analyses that rely on outmoded antitrust thinking and are out of touch with the real dynamics of the music industry.
The primary claim of critics such as the American Antitrust Institute and Public Knowledge is that this merger would result in an over-concentrated music market and create a “super-major” that could constrain output, raise prices and thwart online distribution channels, thus harming consumers. But this claim, based on a stylized, theoretical economic model, is far too simplistic and ignores the market’s commercial realities, the labels’ self-interest and the merger’s manifest benefits to artists and consumers.
For market concentration to raise serious antitrust issues, products have to be substitutes. This is in fact what critics argue: that if UMG raised prices now it would be undercut by EMI and lose sales, but that if the merger goes through, EMI will no longer constrain UMG’s pricing power. However, the vast majority of EMI’s music is not a substitute for UMG’s. In the real world, there simply isn’t much price competition across music labels or among the artists and songs they distribute. Their catalogs are not interchangeable, and there is so much heterogeneity among consumers and artists (“product differentiation,” in antitrust lingo) that relative prices are a trivial factor in consumption decisions: No one decides to buy more Lady Gaga albums because the Grateful Dead’s are too expensive. The two are not substitutes, and assessing competitive effects as if they are, simply because they are both “popular music,” is not instructive.
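To make the substitutes point concrete, the standard tool antitrust economists reach for is cross-price elasticity of demand: how much sales of one product move when a rival's price changes. The toy calculation below is a minimal sketch with invented numbers, not data from the merger record; it simply illustrates why a near-zero cross-price elasticity means two catalogs are not meaningful substitutes.

```python
# Hypothetical illustration only: the percentages below are invented,
# not drawn from the UMG/EMI record.

def cross_price_elasticity(pct_change_qty_b: float, pct_change_price_a: float) -> float:
    """Percent change in quantity sold of product B per percent change in the
    price of product A. Large positive values indicate close substitutes;
    values near zero indicate the products do not compete on price."""
    return pct_change_qty_b / pct_change_price_a

# Close substitutes: a 10% price hike on catalog A shifts many buyers to catalog B.
print(cross_price_elasticity(pct_change_qty_b=8.0, pct_change_price_a=10.0))   # 0.8

# Differentiated catalogs (the post's claim about music labels): B's sales
# barely move, so the cross-price elasticity is close to zero.
print(cross_price_elasticity(pct_change_qty_b=0.2, pct_change_price_a=10.0))   # 0.02
```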
The privacy debate has been increasingly shaped by an apparent consensus that de-identifying sets of personally identifiable information (PII) doesn’t work. In particular, this has led the FTC to abandon the PII/non-PII distinction on the assumption that re-identification is too easy. But a new paper shatters this supposed consensus by rebutting the methodology of Latanya Sweeney’s seminal 1997 study of re-identification risks, which has in turn shaped HIPAA’s rules for de-identification of health data and the larger privacy debate ever since.
This new critical paper, “The ‘Re-Identification’ of Governor William Weld’s Medical Information: A Critical Re-Examination of Health Data Identification Risks and Privacy Protections, Then and Now,” was published by Daniel Barth-Jones, an epidemiologist and statistician at Columbia University. After carefully re-examining the methodology of Sweeney’s 1997 study, he concludes that re-identification attempts will face “far-reaching systemic challenges” that are inherent in the statistical methods used to re-identify. In short, re-identification turns out to be harder than it seemed—so our identity can more easily be obscured in large data sets. This more nuanced story must be understood by privacy law scholars and public policy-makers if they want to realistically assess the current privacy risks posed by de-identified data—not just health data, but all data.
The importance of Barth-Jones’s paper is underscored by the example of Vioxx, which stayed on the market years longer than it should have because of HIPAA’s privacy rules, thus resulting in between 88,000 and 139,000 unnecessary heart attacks and 27,000 to 55,000 avoidable deaths—as University of Arizona Law Professor Jane Yakowitz Bambauer explained in a recent Huffington Post piece.
Ultimately, overstating the risk of re-identification causes policymakers to strike the wrong balance in the trade-off between privacy and other competing values. As Barth-Jones and Yakowitz have suggested, policymakers should instead focus on setting standards for the proper de-identification of data that are grounded in a rigorous statistical analysis of re-identification risks. A safe harbor for proper de-identification, combined with legal limitations on re-identification, could protect consumers against real privacy harms while still allowing the free flow of data that drives research and innovation throughout the economy.
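For readers who want a concrete sense of what a statistical re-identification-risk check looks like, here is a minimal Python sketch using made-up records and the classic quasi-identifiers from Sweeney’s study (ZIP code, birth date, and sex). It illustrates the general idea of measuring how many records are unique on those fields; it is not Barth-Jones’s methodology or code.

```python
from collections import Counter

# Made-up records for illustration: (ZIP code, birth date, sex, diagnosis).
records = [
    ("02138", "1945-07-31", "M", "flu"),
    ("02138", "1945-07-31", "M", "asthma"),
    ("02139", "1960-01-15", "F", "flu"),
    ("02140", "1972-03-02", "F", "diabetes"),
]

def quasi_id(record):
    """The quasi-identifier combination: ZIP code, birth date, sex."""
    return record[:3]

# How often does each quasi-identifier combination occur in the data set?
counts = Counter(quasi_id(r) for r in records)

# A record can only be singled out if its combination is unique (k = 1), and
# even then an attacker still needs an outside source linking that combination
# to a name. The share of unique records is thus a crude upper bound on risk.
unique = sum(1 for r in records if counts[quasi_id(r)] == 1)
print(f"{unique} of {len(records)} records are unique on the quasi-identifiers")
```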
Unfortunately, the Barth-Jones paper has not received the attention it deserves. So I encourage you to consider writing about it, or just take a moment to share it with your friends on Twitter or Facebook.
Adam Thierer, senior research fellow at the Mercatus Center at George Mason University, discusses recent calls for nationalizing Facebook or at least regulating it as a public utility. Thierer argues that Facebook is not a public good in any formal economic sense, and that nationalizing the social network would be a big step in the wrong direction. He argues that nationalizing the network is neither the only nor the most effective means of addressing the privacy concerns that surround Facebook and other social networks. Nor is Facebook a monopoly, he says, arguing that customers have many other choices. Thierer also points out that regulation is not without its problems, including the potential that a regulator will be captured by the regulated network, thus making monopoly a self-fulfilling prophecy.
Is competition really a problem in the tech industry? That was the question the folks over at WebProNews asked me to come on their show and discuss this week. I offer my thoughts in the following 15-minute clip. I have also embedded a few of my recent essays on this topic down below, several of which I mentioned during the show.
I suppose there’s something to be said for the fact that two days into DirecTV’s shutdown of 17 Viacom programming channels (26 if you count the HD feeds) no congressman, senator or FCC chairman has come forth demanding that DirecTV reinstate them to protect consumers’ “right” to watch SpongeBob SquarePants.
Yes, it’s another one of those dust-ups between studios and cable/satellite companies over the cost of carrying programming. Two weeks ago, DirecTV competitor Dish Network dropped AMC, IFC and WE TV. As with AMC and Dish, Viacom wants a bigger payment—in this case 30 percent more—from DirecTV to carry its channel line-up, which includes Comedy Central, MTV and Nickelodeon. DirecTV balked, wanting to keep its own prices down. Hence, as of yesterday, those channels are not available pending a resolution.
As I have said in the past, Washington should let both these disputes play out. For starters, despite some consumer complaints, demographics might be in DirecTV’s favor. True, Viacom has some popular channels with popular shows. But they all skew to younger age groups that are turning to their tablets and smartphones for viewing entertainment. At the same time, satellite TV service likely skews toward homeowners, a slightly older demographic. It could be that DirecTV’s research and the math show that dropping Viacom will not cost it too many subscribers.
So, as I write this, I’m watching a House Commerce “Future of Video” hearing and trying to figure out if I’m the only person who was alive and watching television in the 1970s. I mean, come on, doesn’t anyone else remember the era of the Big 3 and meager viewing options?! Well, for those who forget, here were some of your TV viewing options on this day in history, June 27, 1972. Read it and weep (and then celebrate the cornucopia of viewing riches we enjoy today in a world of over 900 video channels + the Internet).