I published an opinion piece today for CNET arguing against recent calls to reclassify broadband Internet as a “telecommunications service” under Title II of the Communications Act.

The push to do so comes as supporters of the FCC’s proposed Net Neutrality rules fear that the agency’s authority to adopt them under its so-called “ancillary jurisdiction” won’t survive judicial review. In January, the U.S. Court of Appeals for the D.C. Circuit heard arguments in Comcast’s appeal of sanctions levied against the cable company for violating the neutrality principles (which have not yet been adopted in a formal rulemaking). During oral arguments, the three-judge panel expressed considerable doubt about the FCC’s jurisdiction to issue the sanctions. Only the published opinion (forthcoming) will matter, of course, but anxiety is growing.

Solving the Net Neutrality jurisdiction problem with a return to Title II regulation is a staggeringly bad idea, and a counterproductive one at that. My article traces the parallel development of “telecommunications services” and the largely unregulated “information services” (aka Title I) since the Telecommunications Act of 1996, making the point that life for consumers has been far more exciting, and has generated far more wealth, under the latter than the former.

Under Title I, in short, we’ve had the Internet revolution.  Under Title II, we’ve had the decline and fall of basic wireline phone service, boom and bust in the arbitraging competitive local exchange market, massive fraud in the bloated e-Rate program, and the continued corruption of local licensing authorities holding applications hostage for legal and illegal bribes.


Read my take at Cato@Liberty.

Can we steer people toward hard news — and get them to financially support it — through the use of “news vouchers” or “public interest vouchers”? That’s the subject of this latest installment in my ongoing series on proposals to have the government play a greater role in the media sector in the name of sustaining struggling enterprises or “saving journalism.”

As I mentioned here previously, last week I testified at the FCC’s first “Future of Media” workshop on “Serving the Public Interest in the Digital Era.” (@3:29 mark of video).  It was a great pleasure to testify alongside the all-star cast there that day, which included the always-provocative Jeff Jarvis of the CUNY Graduate School of Journalism.  He delivered some very entertaining remarks and vociferously pushed back against many of the ideas that others were suggesting about “saving journalism.” Jeff is a very optimistic guy–far more optimistic than me, in fact–about the prospect that new media and citizen journalism will help fill whatever void is left by the death of many traditional media operators and institutions. He had a lively exchange with Srinandan Kasi, Vice President, General Counsel and Secretary of the Associated Press, that is worth watching (somewhere after the 5-hour mark on the video).

Nonetheless, Jarvis is enough of a realist to know that it has always been difficult to find resources to fund hard news, which he creatively refers to as “broccoli journalism.” This is what is keeping the FCC, the FTC (whose workshop is today), and many media worrywarts up at night: the fear that as traditional financing mechanisms falter (advertising, classifieds, subscription revenues, etc.), many traditional news-gathering efforts and institutions will disappear. Of course, while it is certainly true that we are in the midst of a gut-wrenching media revolution with a great deal of creative destruction taking place, it is equally true that exciting new media business models and opportunities are developing. We shouldn’t overlook that, as I argued here and here.

Anyway, a lot of different proposals are being put forth by scholars and policymakers to find new ways to finance news-gathering or “save journalism.” One idea that has been gaining steam of late is the “public interest voucher,” or what Robert W. McChesney & John Nichols, authors of the new book The Death and Life of American Journalism, call a “Citizenship News Voucher.” McChesney discussed this idea in more detail when he spoke at today’s FTC event on saving journalism.

An Associated Press story this morning by Eileen AJ Connelly provides our latest example of Regulatory Whack-A-Mole, known to scholars as “term substitution.”

Bank of America announced that it will discontinue charging overdraft fees on debit cards. This comes in response to new regulations that prohibit banks from charging overdraft fees unless the consumer has consented to the fee.  Since the bank has no way of getting your consent when you walk into Starbucks and perpetrate an overdraft while buying your latte macho grande and muffin, it simply won’t let the transaction go through.

Wa-Hoo, another victory for consumers. Well, not quite. Customers who place a high value on not being embarrassed in Starbucks are arguably worse off. (How do you return a latte macho grande if you find out you don’t have enough money to pay for it after your coffee concierge has mixed it?) More seriously, customers who might want to use an overdraft for a more substantial purchase will no longer have this option.

I wonder about the argument that regulators are saving hapless, uninformed consumers. The AP article reveals that 93 percent of overdraft fees are generated by 14 percent of customers — “serial overdrafters.” That means there are a lot of folks out there who repeatedly try to use their debit cards as a source of credit, albeit an expensive one. I don’t know about you, but it would only take one or two overdraft fees before I’d realize it’s cheaper to keep a $25 balance in my account than to pay more than that in multiple overdraft fees. If most overdrafters have done this more than once, they must know they will be charged a fee and have decided that’s the lesser of multiple evils. So why take this choice away from them?
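The break-even reasoning above is simple enough to sketch in a few lines. As an illustration only (the per-incident fee of $35 is an assumed typical figure, not one stated in the AP article), even a couple of overdrafts a year cost more than the small buffer the author suggests keeping:

```python
# Illustrative sketch: cost of repeated overdraft fees vs. simply
# keeping a small buffer balance in the account.
# The $35 fee is an assumed typical figure, not from the article.

OVERDRAFT_FEE = 35.00   # assumed per-incident fee (hypothetical)
BUFFER = 25.00          # balance the author suggests keeping

def yearly_fee_cost(overdrafts_per_year: int, fee: float = OVERDRAFT_FEE) -> float:
    """Total paid in overdraft fees over a year."""
    return overdrafts_per_year * fee

# A "serial overdrafter" with, say, 6 overdrafts a year pays $210
# in fees -- against $25 tied up in a buffer that avoids them all.
fees = yearly_fee_cost(6)
print(f"Fees paid: ${fees:.2f} vs. buffer tied up: ${BUFFER:.2f}")
```

Under any plausible fee, the buffer wins after the first incident or two, which is the author’s point: serial overdrafters who keep paying must be choosing the fee deliberately.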

Point-of-sale overdrafts may not be the only casualty of this regulation. The article quotes banking analyst Robert Meara’s prediction that banks might curtail free checking, which many apparently offer as a loss leader to generate fee income. A smaller stream of fee income makes “free checking” less attractive for banks.

Which consumers does this ultimately hurt? I can think of one group: people with low incomes who can’t afford checking account fees and use debit cards responsibly.

Somehow I doubt that was the regulators’ intention.

So reports the Wall Street Journal:

Lawmakers working to craft a new comprehensive immigration bill have settled on a way to prevent employers from hiring illegal immigrants: a national biometric identification card all American workers would eventually be required to obtain.

It’s the natural evolution of the policy called “internal enforcement” of immigration law, as I wrote in my Cato Institute paper, “Franz Kafka’s Solution to Illegal Immigration.”

Once in place, watch for this national ID to regulate access to financial services, housing, medical care and prescriptions—and, of course, serve as an internal passport.

Should ISPs be barred under net neutrality from discriminating against illegal content? Not according to the FCC’s draft net neutrality rule, which defines efforts by ISPs to curb the “transfer of unlawful content” as reasonable network management. This exemption is meant to ensure providers have the freedom to filter or block unlawful content like malicious traffic, obscene files, and copyright-infringing data.

EFF and Public Knowledge (PK), both strong advocates of net neutrality, are not happy about the copyright infringement exemption. The groups have urged the FCC to reconsider what they describe as the “copyright loophole,” arguing that copyright filters amount to “poorly designed fishing nets.”

EFF’s and PK’s concerns about copyright filtering aren’t unreasonable. While filtering technology has come a long way over the last few years, it remains a fairly crude instrument for curbing piracy and suffers from false positives. That’s because it’s remarkably difficult to accurately distinguish between unauthorized copies of copyrighted works and similar non-infringing files. And because filters generally flag unauthorized copies automatically, without human intervention, even when they correctly identify copyrighted material they often disrupt lawful, non-infringing uses such as fair use.

Despite copyright filtering technology’s imperfections, however, outlawing it is the wrong approach. At its core, ISP copyright filtering represents a purely private, voluntary method of dealing with the great intellectual property challenge. This is exactly the sort of approach advocates of limited government should embrace. As Adam and Wayne argued back in 2001:

To lessen the reliance on traditional copyright protections, policymakers should ensure that government regulations don’t stand in the way of private efforts to protect intellectual property.


The Treasury Department today announced that it would grant the State Department’s December request (see the Iran letter here) for a waiver from U.S. embargoes that would allow Iranians, Sudanese and Cubans to download “free mass market software … necessary for the exchange of personal communications and/or sharing of information over the internet such as instant messaging, chat and email, and social networking.”

I’m delighted to see that the Treasury Department is implementing Secretary Clinton’s pledge to make it easier for citizens of undemocratic regimes to use Internet communications tools like e-mail and social networking services offered by U.S. companies (which Adam discussed here). It has been no small tragedy of mindless bureaucracy that our sanctions on these countries have actually hampered communications and collaboration by dissidents—without doing anything to punish oppressive regimes. So today’s announcement is a great victory for Internet freedom and will go a long way toward bringing the kind of free expression we take for granted in America to countries like Iran, Sudan and Cuba.

But I’m at a loss to explain why the Treasury Department’s waiver is limited to free software. The U.S. has long objected when other countries privilege one model of software development over another—and rightly so: Government should remain neutral as between open-source and closed-source, and between free and paid models. This “techno-agnosticism” for government is a core principle of cyber-libertarianism: Let markets work out the right mix of these competing models through user choice!

Why should we allow dissidents to download free “Web 2.0” software but not paid software? Not all mass-market tools dissidents would find useful are free. Many “freemium” apps, such as Twitter client software, require purchase to unlock full functionality, sometimes including the privacy and security features that are especially useful for dissidents. To take a very small example that’s hugely important to me as a user, Twitter is really only useful on my Android mobile phone because I run the Twidroid client. But the free version doesn’t support multiple accounts or lists, which are essential features for a serious Tweeter. The Pro version costs just $4.89—but if I lived in Iran, U.S. sanctions would prevent me from buying it. More generally, we just don’t know what kinds of innovative apps or services might be developed that would be useful to dissidents, so why foreclose the possibility of supporting them through very small purchases?

Just a heads-up that on my weekly tech policy podcast, Surprisingly Free Conversations, we’ve just posted an interview with Ethan Zuckerman of Harvard’s Berkman Center for Internet & Society. He recently published an excellent blog post on the limits of internet censorship circumvention technologies, and that’s the topic of our discussion. Ethan writes,

So here’s a provocation: We can’t circumvent our way around internet censorship.

I don’t mean that internet censorship circumvention systems don’t work. They do – our research tested several popular circumvention tools in censored nations and discovered that most can retrieve blocked content from behind the Chinese firewall or a similar system. (There are problems with privacy, data leakage, the rendering of certain types of content, and particularly with usability and performance, but the systems can circumvent censorship.) What I mean is this – we couldn’t afford to scale today’s existing circumvention tools to “liberate” all of China’s internet users even if they all wanted to be liberated.

You can listen to this episode here, and you can subscribe to the show on iTunes or via RSS.

The way Ben Kunz puts it in a new Business Week article, “Each device contains its own widening universe of services and applications, many delivered via the Internet. They are designed to keep you wedded to a particular company’s ecosystem and set of products.”

I like Ben’s article a lot because it recognizes that “walling off” and a “widening universe” are not mutually exclusive. If only policymakers and regulators acknowledged that. They must know it, but admitting it means acknowledging their limited relevance to consumer well-being and a need to step aside. So they feign ignorance.

Many claim to worry about the rise of proprietary services (I, as you can probably tell, often doubt their sincerity) but I’ve always regarded a “Splinternet” as a good thing that means more, not less, communications wealth. I first wrote about this in Forbes in 2000 when everyone was fighting over spam, privacy, content regulation, porn and marketing to kids.

Increasing wealth means a copy-and-paste world for content across networks, and it means businesses will benefit from presence across many of tomorrow’s networks, generating more value for future generations of consumers and investors. We won’t likely talk of an “Internet” with a capital-“I” and a reverent tremble the way we do now, because what matters is not the Internet as it happens to look right now, but underlying Internet technology that can just as easily erupt everywhere else, too.

Meanwhile, new application, device and content competition within and across networks disciplines the market process and “regulates” things far better than the FCC can. Yet the FCC’s very function is to administer or artificially direct proprietary business models, which it must continue attempting to do (and for which it pleads assistance in the net neutrality rulemaking) if it is going to remain relevant. I recently described the urgency of stopping the agency’s campaign in “Splinternets and cyberspaces vs. net neutrality,” and in my January 2010 comments to the FCC on net neutrality.


“We’re from government and we’re here to help save journalism.”

That seems to be the hot new meme in media policy circles these days. Last week, it was the Federal Communications Commission (FCC) kicking off its “Future of Media” effort with a workshop on “Serving the Public Interest in the Digital Era.” This week, it’s the Federal Trade Commission’s (FTC) turn as it hosts the second in its series of workshops on “How Will Journalism Survive the Internet Age?” Meanwhile, the Senate has already held hearings on “the future of journalism,” and Senator Benjamin L. Cardin (D-MD) recently introduced the “Newspaper Revitalization Act,” which would allow newspapers to become nonprofit organizations in an effort to help them stay afloat.

I have no doubt that many of the public policymakers behind these efforts have the best of intentions and really are concerned about what many believe to be a crisis in the field of journalism. But here are my three primary concerns with Washington’s sudden interest in “saving journalism”: