March 2010

So reports the Wall Street Journal:

Lawmakers working to craft a new comprehensive immigration bill have settled on a way to prevent employers from hiring illegal immigrants: a national biometric identification card all American workers would eventually be required to obtain.

It’s the natural evolution of the policy called “internal enforcement” of immigration law, as I wrote in my Cato Institute paper, “Franz Kafka’s Solution to Illegal Immigration.”

Once in place, watch for this national ID to regulate access to financial services, housing, medical care and prescriptions—and, of course, serve as an internal passport.

Should ISPs be barred under net neutrality from discriminating against illegal content? Not according to the FCC’s draft net neutrality rule, which defines efforts by ISPs to curb the “transfer of unlawful content” as reasonable network management. This exemption is meant to ensure providers have the freedom to filter or block unlawful content like malicious traffic, obscene files, and copyright-infringing data.

EFF and Public Knowledge (PK), both strong advocates of net neutrality, are not happy about the copyright infringement exemption. The groups have urged the FCC to reconsider what they describe as the “copyright loophole,” arguing that copyright filters amount to “poorly designed fishing nets.”

EFF’s and PK’s concerns about copyright filtering aren’t unreasonable. While filtering technology has come a long way over the last few years, it remains a fairly crude instrument for curbing piracy and suffers from false positives. That’s because it’s remarkably difficult to accurately distinguish between unauthorized copyrighted works and similar non-infringing files. And because filters generally flag unauthorized copies on an automated basis without human intervention, even when filters get it right, they often disrupt legal, non-infringing uses of copyrighted material like fair use.
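The filter designer's dilemma can be shown in a few lines. This is a toy sketch, not any actual ISP's system; the blocklist and sample payloads are invented for illustration. An exact-hash filter never falsely flags anything, but the slightest re-encoding evades it, so real filters must match on "similarity" instead, and that fuzziness is exactly what sweeps in fair-use excerpts and look-alike files:

```python
import hashlib

# Hypothetical fingerprint list of known copyrighted files (illustrative only).
BLOCKLIST = {hashlib.sha256(b"<bytes of a copyrighted film>").hexdigest()}

def exact_hash_filter(payload: bytes) -> bool:
    """Flag a transfer only if its bytes exactly match a known work."""
    return hashlib.sha256(payload).hexdigest() in BLOCKLIST

# A bit-for-bit copy is caught:
assert exact_hash_filter(b"<bytes of a copyrighted film>")

# But any trivial change -- re-encoding, trimming a single byte -- slips
# through. Loosening the match to catch such copies is what produces the
# false positives on legitimate, non-infringing files described above.
assert not exact_hash_filter(b"<bytes of a copyrighted film> ")
```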

Despite copyright filtering technology’s imperfections, however, outlawing it is the wrong approach. At its core, ISP copyright filtering represents a purely private, voluntary method of dealing with the great intellectual property challenge. This is exactly the sort of approach advocates of limited government should embrace. As Adam and Wayne argued back in 2001:

To lessen the reliance on traditional copyright protections, policymakers should ensure that government regulations don’t stand in the way of private efforts to protect intellectual property.


The Treasury Department today announced that it would grant the State Department’s December request (see the Iran letter here) for a waiver from U.S. embargoes that would allow Iranians, Sudanese and Cubans to download “free mass market software … necessary for the exchange of personal communications and/or sharing of information over the internet such as instant messaging, chat and email, and social networking.”

I’m delighted to see that the Treasury Department is implementing Secretary Clinton’s pledge to make it easier for citizens of undemocratic regimes to use Internet communications tools like e-mail and social networking services offered by US companies (which Adam discussed here). It has been no small tragedy of mindless bureaucracy that our sanctions on these countries have actually hampered communications and collaboration by dissidents—without doing anything to punish oppressive regimes. So today’s announcement is a great victory for Internet freedom and will go a long way to bringing the kind of free expression we take for granted in America to countries like Iran, Sudan and Cuba.

But I’m at a loss to explain why the Treasury Department’s waiver is limited to free software. The U.S. has long objected when other countries privilege one model of software development over another—and rightly so: Government should remain neutral as between open-source and closed-source, and between free and paid models. This “techno-agnosticism” for government is a core principle of cyber-libertarianism: Let markets work out the right mix of these competing models through user choice!

Why should we allow dissidents to download free “Web 2.0” apps but not paid ones? Not all mass-market tools dissidents would find useful are free. Many “freemium” apps, such as Twitter clients, require purchase to unlock full functionality, sometimes including privacy and security features that are especially useful for dissidents. To take a very small example that’s hugely important to me as a user, Twitter is really only useful on my Android mobile phone because I run the Twidroid client. But the free version doesn’t support multiple accounts or lists, which are essential functions for a serious Tweeter. The Pro version costs just $4.89—but if I lived in Iran, U.S. sanctions would prevent me from buying this software. More generally, we just don’t know what kinds of innovative apps or services might be developed that would be useful to dissidents, so why foreclose the possibility of supporting them through very small purchases?

Just a heads up that on my weekly tech policy podcast, Surprisingly Free Conversations, we’ve just posted an interview with Ethan Zuckerman of Harvard’s Berkman Center for Internet & Society. He recently published an excellent blog post on the limits to internet censorship circumvention technologies, and that’s the topic of our discussion. Ethan writes,

So here’s a provocation: We can’t circumvent our way around internet censorship.

I don’t mean that internet censorship circumvention systems don’t work. They do – our research tested several popular circumvention tools in censored nations and discovered that most can retrieve blocked content from behind the Chinese firewall or a similar system. (There are problems with privacy, data leakage, the rendering of certain types of content, and particularly with usability and performance, but the systems can circumvent censorship.) What I mean is this – we couldn’t afford to scale today’s existing circumvention tools to “liberate” all of China’s internet users even if they all wanted to be liberated.

You can listen to this episode here, and you can subscribe to the show on iTunes or RSS.

As Ben Kunz puts it in a new Business Week article, “Each device contains its own widening universe of services and applications, many delivered via the Internet. They are designed to keep you wedded to a particular company’s ecosystem and set of products.”

I like Ben’s article a lot because it recognizes that “walling off” and a “widening universe” are not mutually exclusive. If only policymakers and regulators acknowledged that. They must know it, but admitting it means acknowledging their limited relevance to consumer well-being and a need to step aside. So they feign ignorance.

Many claim to worry about the rise of proprietary services (I, as you can probably tell, often doubt their sincerity) but I’ve always regarded a “Splinternet” as a good thing that means more, not less, communications wealth. I first wrote about this in Forbes in 2000 when everyone was fighting over spam, privacy, content regulation, porn and marketing to kids.

Increasing wealth means a copy-and-paste world for content across networks, and it means businesses will benefit from presence across many of tomorrow’s networks, generating more value for future generations of consumers and investors. We won’t likely talk of an “Internet” with a capital-“I” and a reverent tremble the way we do now, because what matters is not the Internet as it happens to look right now, but underlying Internet technology that can just as easily erupt everywhere else, too.

Meanwhile, new application, device and content competition within and across networks disciplines the market process and “regulates” things far better than the FCC can. Yet the FCC’s very function is to administer or artificially direct proprietary business models, which it must continue to attempt to do (and as it pleads for assistance in doing in the net neutrality rulemaking) if it is going to remain relevant. I described the urgency of stopping the agency’s campaign recently in “Splinternets and cyberspaces vs. net neutrality,” and also in the January 2010 comments to the FCC on net neutrality.


“We’re from government and we’re here to help save journalism.”

That seems to be the hot new meme in media policy circles these days. Last week, it was the Federal Communications Commission (FCC) kicking off their “Future of Media” effort with a workshop on “Serving the Public Interest in the Digital Era.” This week, it’s the Federal Trade Commission’s (FTC) turn as they host the second in their series of workshops on How Will Journalism Survive the Internet Age? Meanwhile, the Senate has already held hearings about “the future of journalism,” and Senator Benjamin L. Cardin (D-MD) recently introduced the “Newspaper Revitalization Act,” which would allow newspapers to become nonprofit organizations in an effort to help them stay afloat.

I have no doubt that many of the public policymakers behind these efforts have the best of intentions and really are concerned about what many believe to be a crisis in the field of journalism. But I have three primary concerns with Washington’s sudden interest in “saving journalism.”

by Adam Thierer & Berin Szoka

We’re hoping that the Government Accountability Office (GAO) has made some sort of mistake, because it’s hard to believe its latest findings about the paperwork burden generated by Federal Communications Commission (FCC) regulatory activity. In late January, the GAO released a report on “Information Collection and Management at the Federal Communications Commission” (GAO-10-249), which examined information collection, management, and reporting practices at the FCC. The GAO noted that the FCC gathers information through 413 collection instruments, which include things like: (1) required company filings, such as the ownership of television stations; (2) applications for FCC licenses; (3) consumer complaints; (4) company financial and accounting performance; and (5) a variety of other issues, such as an annual survey of cable operators.  (Note: This does not include filings and responses done pursuant to other FCC NOIs or NPRMs.)

Regardless, the FCC told the GAO that it receives nearly 385 million responses, with an estimated 57 million burden hours associated with the 413 collection instruments. A “burden hour” is defined under the Paperwork Reduction Act as “the time, effort, or financial resources expended by persons to generate, maintain, or provide information to a federal agency.” And the FCC is generating 57 million of ‘em! Even though we are frequently critical of the agency, these numbers are still hard to fathom. Perhaps the GAO has made some sort of mistake here. But here’s what really concerns us if it hasn’t.

PFF is Hiring!

March 5, 2010

Sorry to use the blog as a job board, but I wanted to let readers know that the Progress & Freedom Foundation (PFF) has a couple of positions we’d like to find good people to fill:

  • Senior Economist: PFF is looking for a skilled economist (PhD-level preferred) with experience in the high-tech arena or network-related industries. Our senior economist would be responsible for assisting other PFF analysts on various projects and priorities, but would also be free to pursue other objectives.
  • Vice President, Development & Outreach: PFF is looking for a development director to oversee outreach to supporters and other third parties, and to help us grow the organization.
  • President: Yes, you read that right! After less than 6 months on the job, I’m already tired of management and want to get back to full-time policy wonkery! If you know of someone who would make a great leader, has strong free-market credentials, and extensive experience in the field of high-tech policy and media/communications law, please let me know. I’m quite ready and willing to hand over the keys to someone else so I can spend all my time fighting the good fight to defend free minds, free markets, and free speech!

To apply, please send a resume and cover letter to Adam Thierer (athierer@pff.org). Or, if you have any ideas on good candidates, please let me know that, too.

A couple weeks ago the Google Books Settlement fairness hearing took place in New York City, where Judge Denny Chin heard dozens of oral arguments discussing the settlement’s implications for competition, copyright law, and privacy. The settlement raises a number of very challenging legal questions, and Judge Chin’s decision, expected to come down later this spring, is sure to be a page-turner no matter how he rules.

My work on the Google Books Settlement has focused on reader privacy concerns, which have been a major point of contention between Google and civil liberties groups like EFF, ACLU, and CDT. While I agree with these groups that existing legal protections for sensitive user information stored by cloud computing providers are inadequate, I do not believe that reader privacy should factor into the court’s decision on whether to approve or reject the settlement.

I elaborated on reader privacy in an amicus curiae brief I submitted to the court last September. I argued that because Google Books will likely earn a sizable portion of its revenues from advertising, placing strict limits on data collection (as EFF and others have advocated) would undercut Google’s incentive to scan books, ultimately hurting the very authors whom the settlement is supposed to benefit. While the settlement is not free from privacy risks, such concerns aren’t unique to Google Books nor are they any more serious than the risks surrounding popular Web services like Google search and Gmail. Comparing Google Book Search to brick-and-mortar libraries is inapt, and like all cloud computing providers, Google has a strong incentive to safeguard user data and use it only in ways that benefit users and advertisers.


Yesterday, NetChoice joined the Center for Democracy & Technology and the Maine Civil Liberties Union (and PFF, who submitted written testimony) before the Maine legislature to oppose a bill that would restrict how health-related products can be marketed to minors under age 17.

The bill, LD 1677, is a repeal and replacement for the current law, passed last year, which was strongly opposed by the online industry. As I previously blogged, NetChoice was a lead plaintiff in last year’s lawsuit to enjoin the law. Though well intentioned, the law was overly broad and fraught with constitutional concerns. As a result, Attorney General Mills agreed not to enforce the statute. In October last year, NetChoice joined others in testifying before the Maine Joint Standing Committee on the Judiciary regarding this law. In short, the conclusion of all parties involved was that the current legislation could not stand and that the legislature should move quickly to repeal it.

So we all arrived in Augusta, ready for the next round – after all, this bill is #9 on the NetChoice iAWFUL list! But when we arrived, we were treated to a surprise amendment from the bill sponsor, which became the focus of discussion and testimony. Here’s the amended prohibition:

A person may not knowingly collect and use personal information collected on the Internet from a minor residing in this State for the purposes of pharmaceutical marketing of prescription drugs to that minor, unless the minor specifically requests that information about the prescription drug be provided to them.

John Morris at CDT gave great testimony and generally welcomed the amendment. However, he cautioned the committee that it should make sure that website intermediaries would not have liability for merely displaying ads.