I’ve been working closely with PFF Adjunct Fellow & former ICANN Board member Michael D. Palage on ICANN issues.  Michael had this to say about the ongoing saga of ICANN’s attempt to create new gTLDs.

During the recent ICANN Board meeting in Mexico City, the Board authorized the creation and funding of an Implementation Recommendation Team (IRT).  This team was to be composed of “an internationally diverse group of persons with knowledge, expertise, and experience in the fields of trademark, consumer protection, or competition law, and the interplay of trademarks and the domain name system to develop and propose solutions to the overarching issue of trademark protection in connection with the introduction of new gTLDs.” The IRT is tasked with producing a report for consideration by the ICANN community at the Sydney meeting.

The IRT consists of 24 members:

  • Chairwoman Caroline G. Chicoine;
  • Seventeen members; and
  • Six ex officio members: four IPC-elected officers and two GNSO-elected Board Directors (Bruce Tonkin and Rita Rodin Johnston).

I have a number of friends and colleagues serving on this team and I wish them well in their important endeavor.

I’ve previously proposed a number of rights-protection mechanisms that the IRT should consider.  Today, I offer a few suggestions that I hope will guide the IRT as it embarks on its important work tomorrow.  In particular, I hope the team will implement some of my suggestions intended to make the IRT process more transparent, so the rest of the global Internet community can follow along with its important work and provide constructive input where possible.

Continue reading →

Today, it was my great privilege to guest lecture at Princeton University’s Center for Information Technology Policy. Under the leadership of Ed Felten, who also runs the excellent “Freedom to Tinker” blog, the CITP has quickly become one of America’s premier institutions in the field of IT policy matters. David Robinson, who some of you will remember from his days as an editor at The American, serves as associate director of the CITP program and was kind enough to invite me to speak.  And our own Tim Lee is currently studying there as well.  I wish I were smart enough to get into that program!

The topic of my talk was “The Future of the First Amendment in an Age of Technological Convergence,” and I used the opportunity to create a narrated video of this presentation, which I have delivered to several other groups through the years. In this presentation, I talk about “America’s First Amendment Twilight Zone,” which refers to the fact that identical words and images are regulated in completely different ways today depending on the mode of transmission. This illogical and unfair situation could eventually threaten the Internet, video games, and all new media with many of the misguided regulations that have long been imposed on broadcast television and radio operators. In my presentation, which you can watch below, I make the case for changing our First Amendment regime to ensure “bit equality”: all speech and media platforms should be accorded the gold standard of First Amendment protection.

http://www.youtube.com/v/xJo3tVMScyI&hl=en&fs=1

Continue reading →

The Federal Communications Commission (FCC) has just released a Notice of Inquiry (NOI) in the matter of “Implementation of the Child Safe Viewing Act; Examination of Parental Control Technologies for Video or Audio Programming.” (MB Docket No. 09-26)  This NOI was required by S. 602, the “Child Safe Viewing Act of 2007,” which Congress passed last October and President Bush signed into law on December 2nd.  The measure requires the FCC to examine:

(1) the existence and availability of advanced blocking technologies that are compatible with various communications devices or platforms; (2) methods of encouraging the development, deployment, and use of such technology by parents that do not affect the packaging or pricing of a content provider’s offering; and (3) the existence, availability, and use of parental empowerment tools and initiatives already in the market.

The Act defines the term “advanced blocking technologies” as “technologies that can improve or enhance the ability of a parent to protect his or her child from any indecent or objectionable video or audio programming, as determined by such parent.”  Importantly, the Act also directs the agency to look into blocking technologies that “may be appropriate across a wide variety of distribution platforms, including wired, wireless, and Internet platforms” and which “operate independently of ratings pre-assigned by the creator of such video or audio programming.”   The Act requires that the FCC issue a report to Congress about these technologies no later than August 29, 2009.

When writing about the Child Safe Viewing Act shortly after its introduction in the summer of 2007, I noted that the measure potentially represented the beginning of “convergence-era content regulation” at the FCC.  The two clauses quoted above are of particular importance in that regard.  Congress has essentially invited the FCC to engage in unprecedented oversight of media platforms and ratings systems that the agency previously had very little ability to influence.  Continue reading →

I’ve been catching up on Radio Berkman, the podcast produced by our friends at the Berkman Center for Internet & Society and a great companion to the TLF’s own Tech Policy Weekly Podcast.  There’s been a lot of talk about government transparency on the TLF lately, including TPW 40: Obama, e-Government & Transparency.  But that conversation has been mainly focused on how to make “public” records accessible.

The most recent Radio Berkman episode, “Can You Keep a Secret?,” explores the thorny questions of what should be deemed public in the first place and what should be classified:

The government keeps secrets. We take that for granted. But should we? Some speculate that intelligence agencies and elected officials are a little bit trigger happy with the “Top Secret” stamp, and that society would benefit from greater openness. With the government classifying millions of pages of documents per year – in a recent year the U.S. classified about five times the number of pages added to the Library of Congress – a great deal of useful human knowledge gets put under lock and key. But some argue that secrecy is still crucial to our national security. Radio Berkman pokes its head into a recent talkback with the directors of the film Secrecy, Harvard University professors Peter Galison and Robb Moss. They are joined by Harvard Law School professors Jonathan Zittrain, Martha Minow, and Jack Goldsmith.

I look forward to seeing the film (when it comes out on Netflix).  

What I found most interesting was the discussion of how the essential trade-off in the relationship between the media and the state has always been between the media’s “independence” and its “responsibility” (~33:30 in).  Even the staunchest critics of the national security state would probably accept that there are some stories the media shouldn’t publish because they’d jeopardize the safety of Americans.  But we all want the media to blow the whistle on the bad stuff that goes on behind a veil of secrecy.  Drawing that line is a terribly difficult task.  But it becomes even more complicated with the decline of traditional professional investigative journalism and the rise of blog/amateur journalism.   Continue reading →

I’ve got a new PFF paper out today entitled, “Who Needs Parental Controls? Assessing the Relevant Market for Parental Control Technologies.” In this piece, I address the argument made by some media and Internet critics who say that government intervention (perhaps even censorship) may be necessary because parental control technologies are not widely utilized by most Americans. But, as I note in the paper, the question that these critics always fail to ask is: How many homes really need parental control technologies? The answer: Far fewer than you think. Indeed, the relevant universe of potential parental control users is actually quite limited.

I find that the percentage of homes that might need parental control technologies is certainly no greater than the 32% of U.S. households with children in them. Moreover, the relevant universe of potential parental control users is likely much less than that because households with very young children or older teens often have little need for parental control technologies. Finally, some households do not utilize parental control technologies because they rely on alternative methods of controlling media content and access in the home, such as household media rules. Consequently, policymakers should not premise regulatory proposals upon the limited overall “take-up” rate for parental control tools since only a small percentage of homes might actually need or want them.
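
The narrowing logic here is simple arithmetic: start with the share of households with children, then successively remove the groups with little need for the tools. Below is a minimal sketch in Python of that calculation. Only the 32% figure comes from the paper; the other shares are hypothetical placeholders for illustration, not numbers from the paper itself.

# Back-of-the-envelope sketch of the paper's narrowing argument.
# Only the 32% figure comes from the post; the other shares are
# HYPOTHETICAL placeholders chosen purely for illustration.
HOUSEHOLDS_WITH_KIDS = 0.32        # share of U.S. households with children
SHARE_TOO_YOUNG_OR_OLD = 0.30      # hypothetical: very young kids or older teens
SHARE_USING_HOUSE_RULES = 0.40     # hypothetical: rely on household media rules

relevant = HOUSEHOLDS_WITH_KIDS
relevant *= 1 - SHARE_TOO_YOUNG_OR_OLD   # little need for technical controls
relevant *= 1 - SHARE_USING_HOUSE_RULES  # use non-technical methods instead

print(f"Upper bound: {HOUSEHOLDS_WITH_KIDS:.0%} of households")
print(f"Illustrative relevant universe: {relevant:.1%} of households")

Whatever the true shares, the point stands: each filter shrinks the universe, so the headline "take-up" rate is measured against the wrong denominator.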

If you don’t care to read the whole nerdy thing, I’ve created this short video summarizing the major findings of the paper.

http://www.youtube.com/v/a7Fnf3Ztt-U&hl=en&fs=1

And the document is embedded below the fold in a Scribd reader. Continue reading →

Acting FCC Chairman Michael Copps declared yesterday, in a speech celebrating the 75th anniversary of the FCC and the Communications Act, that it was time to think “more rigorously” about the impact of the migration of communications to the Internet and “how to ensure that as the Internet becomes our primary vehicle for communicating with one another, it protects the public interest and informs the civic dialogue that America depends on.”

“In the beginning was the Word,” said John Something-or-other.  Well, the word here is “public interest” and—make no mistake about it—this is the beginning of a wholesale attempt to impose the regulatory regime of the broadcast era onto the Internet.

As Adam Thierer has pointed out, the “public interest” is really no standard at all—just so much hot air.

A classic piece here by Farhad Manjoo of Slate about how “the Internet of 1996 is almost unrecognizable compared with what we have today.”  It’s a fun look back at just how far the Internet has come over the past 13 years.  I love this passage:

We all know that the Internet has changed radically since the ’90s, but there’s something dizzying about going back to look at how people spent their time 13 years ago. Sifting through old Web pages today is a bit like playing video games from the 1970s; the fun is in considering how awesome people thought they were, despite all that was missing. In 1996, just 20 million American adults had access to the Internet, about as many as subscribe to satellite radio today. The dot-com boom had already begun on Wall Street (Netscape went public in 1995), but what’s striking about the old Web is how unsure everyone seemed to be about what the new medium was for. Small innovations drove us wild: Look at those animated dancing cats! Hey, you can get the weather right from your computer! In an article ranking the best sites of ’96, Time gushed that Amazon.com let you search for books “by author, subject or title” and “read reviews written by other Amazon readers and even write your own.” Whoopee. The very fact that Time had to publish a list of top sites suggests lots of people were mystified by the Web. What was this place? What should you do here? Time recommended that in addition to buying books from Amazon, “cybernauts” should read Salon, search for recipes on Epicurious, visit the Library of Congress, and play the Kevin Bacon game.

God, do you remember those days?  I sure do.  I penned a piece last month about the amazing technological progress we have witnessed over the past decade.

Meanwhile, we have a whole town full of clowns here in DC looking to regulate the Internet and digital technology for one reason or another.  All these would-be regulators need to step back and appreciate just how well markets have been working and why regulation would be a disaster for technological progress. Viva la (Technology) Revolution!

This is the third in a series of articles about Internet technologies. The first article was about web cookies. The second article explained the network neutrality debate. This article explains network management systems. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed.

There has been lots of talk on blogs recently about Cox Communications’ network management trial. Some see this as another nail in Network Neutrality’s coffin, while many users are just hoping for anything that will make their network connection faster.

As I explained previously, the Network Neutrality debate is best understood as a debate about how to best manage traffic on the Internet.

Those who advocate for network neutrality are actually advocating for legislation that would set strict rules for how ISPs manage traffic. They essentially want to re-classify ISPs as common carriers. Those on the other side of the debate believe that the government is unable to set rules for something that changes as rapidly as the Internet. They want ISPs to have complete freedom to experiment with different business models and believe that anything approaching real discrimination will be swiftly dealt with by market forces.

But what both sides seem to ignore is that traffic must be managed. Even if every connection and router on the Internet were built to carry ten times the expected capacity, there would still be occasional outages. It is foolish to believe that routers will never become overburdened; they already do. Current routers already have a system for handling packets when they get overburdened: they simply drop all packets received after their buffers are full. This system is fair, but it is not optimized.

The network neutrality debate needs to shift to a debate about what should be prioritized and how. One way packets can be prioritized is by the type of data they carry: applications that require low latency would be prioritized, and those that do not would not be.
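
To make the tail-drop point concrete, here is a minimal sketch in Python of the two queueing policies described above. The packet format and the traffic-class names are hypothetical; this illustrates the idea, not any actual router or ISP implementation.

from collections import deque

BUFFER_SIZE = 8  # illustrative capacity; real router buffers are far larger


class TailDropQueue:
    """Plain FIFO: once the buffer is full, every new packet is dropped."""

    def __init__(self, size=BUFFER_SIZE):
        self.size = size
        self.packets = deque()

    def enqueue(self, packet):
        if len(self.packets) >= self.size:
            return False  # buffer full: the arriving packet is simply lost
        self.packets.append(packet)
        return True


class LatencyAwareQueue(TailDropQueue):
    """Same bounded buffer, but low-latency traffic (e.g., VoIP) may
    displace queued bulk traffic (e.g., file transfers) when full."""

    LOW_LATENCY = {"voip", "gaming"}  # hypothetical traffic-class labels

    def enqueue(self, packet):
        if len(self.packets) < self.size:
            self.packets.append(packet)
            return True
        if packet["class"] in self.LOW_LATENCY:
            # Evict the oldest queued bulk packet, if any, to make room.
            for i, queued in enumerate(self.packets):
                if queued["class"] not in self.LOW_LATENCY:
                    del self.packets[i]
                    self.packets.append(packet)
                    return True
        return False  # full of equal-or-higher-priority traffic: drop


# Example: a full buffer of bulk transfers still admits a VoIP packet.
q = LatencyAwareQueue(size=2)
q.enqueue({"class": "bulk", "payload": b"..."})
q.enqueue({"class": "bulk", "payload": b"..."})
assert q.enqueue({"class": "voip", "payload": b"..."})  # bumps a bulk packet

Under plain tail drop, a VoIP packet arriving at a full buffer is lost like any other; under the latency-aware policy, it displaces a bulk packet instead. Which packets deserve that treatment, and who decides, is precisely where the debate should focus.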

This week, the Ninth Circuit Court of Appeals struck down a California video game statute as unconstitutional, holding that it violated both the First and Fourteenth Amendments to the federal Constitution.  The California law, which passed in October 2005 (A.B. 1179), would have blocked the sale of “violent” video games to those under 18 and required labels on all games. Offending retailers could have been fined for failure to comply with the law.  It was immediately challenged by the Video Software Dealers Association and the Entertainment Software Association and, in August 2007, a district court decision in the case of Video Software Dealers Association v. Schwarzenegger [decision here] issued a permanent injunction against the law. The Ninth Circuit heard the state’s challenge to the injunction last year and handed down its decision this week [decision here], holding the statute unconstitutional. The key passage:

We hold that the Act, as a presumptively invalid content-based restriction on speech, is subject to strict scrutiny and not the “variable obscenity” standard from Ginsberg v. New York, 390 U.S. 629 (1968). Applying strict scrutiny, we hold that the Act violates rights protected by the First Amendment because the State has not demonstrated a compelling interest, has not tailored the restriction to its alleged compelling interest, and there exist less-restrictive means that would further the State’s expressed interests. Additionally, we hold that the Act’s labeling requirement is unconstitutionally compelled speech under the First Amendment because it does not require the disclosure of purely factual information, but compels the carrying of the State’s controversial opinion. Accordingly, we affirm the district court’s grant of summary judgment to Plaintiffs and its denial of the State’s cross-motion. Because we affirm the district court on these grounds, we do not reach two of Plaintiffs’ challenges to the Act: first, that the language of the Act is unconstitutionally vague, and, second, that the Act violates Plaintiffs’ rights under the Equal Protection Clause of the Fourteenth Amendment.

Continue reading →

ICANN has just released a second draft of its Applicant Guidebook, which would guide the creation of new generic top-level domains (gTLDs) such as .BLOG, .NYC or .BMW. As ICANN itself declared (PDF), “New gTLDs will bring about the biggest change in the Internet since its inception nearly 40 years ago.”  PFF Adjunct Fellow and former ICANN Board member Michael Palage addressed the key problems with ICANN’s original proposal in his paper ICANN’s “Go/No-Go” Decision Concerning New gTLDs (PDF & embedded below), released earlier this week.

ICANN deserves credit for its detailed analysis of the many comments on the original draft, which Mike summarized back in December.  ICANN also deserves credit for addressing two strong concerns of the global Internet community in response to the first draft:

  • ICANN has removed its proposed 5% global domain name tax on all registry services, something Mike explains in greater detail in his “Go/No-Go” paper.
  • ICANN has commissioned a badly-needed economic study on the dynamics of the domain name system “in broad.” But such a study must address how the fees ICANN collects from specific user communities relate to the actual costs of the services ICANN provides. The study should also consider why gTLDs should continue to provide such a disproportionate percentage of ICANN’s funding—currently 90%—given increasing competition between gTLDs and ccTLDs (e.g., the increasing use of .CN in China instead of .COM).

These concerns are part of a broader debate:  Will ICANN abide by its mandate to justify its fees based on recovering the costs of services associated with those fees, or will ICANN be free to continue “leveraging its monopoly over an essential facility of the Internet (i.e., recommending additions to the Internet’s Root A Server) to charge whatever fees it wants?”  If, as Mike has discussed, ICANN walks away from its existing contractual relationship with the Department of Commerce and claims “fee simple absolute” ownership of the domain name system, who will enforce such a cost-recovery mandate?

But ICANN simply “kicked the can down the road on the biggest concern”: how to minimize abusive domain name registrations (e.g., cybersquatting, typosquatting, phishing, etc.) and reduce their impact on consumers. Continue reading →