August 2010

Are you a tech policy geek who just can’t get enough Internet policy & cyberlaw books in your life?  Alternatively, would you just like to hear two such geeks talk about some of the most important tech policy books out there so you don’t have to read them yourself?!

Either way, you might want to join TLF-alum Tim Lee and me for a book chat over at his blog on Wednesday night at 9:00 pm EST.  Tim is experimenting with a new tool that his brother has developed called Envolve, which allows real-time user chat within a website or blog. Pretty cool tool, although I hope my increasingly arthritic fingers don’t fail me while I am trying to post rapid-fire responses to Tim or other participants!  [Seriously, I am 41 and my fingers already feel like rusty hinges. Sucks.]

Anyway, if you are interested, join us for the chat and let us know what you think.  I’ll be discussing some of my early picks for most important info-tech policy book of 2010 and relating them to previous choices from 2008 and 2009.  I’ll also be placing some of them along my Internet “optimist v. pessimist” spectrum.

As I always say, I read books so you don’t have to!  All my reviews are here and here’s my Shelfari bookshelf.

I’ve just published a long analysis for CNET of the proposed legislative framework presented yesterday by Google and Verizon.

The proposal has generated howls of anguish from the usual suspects (see quotes appearing in Cecilia Kang, “Silicon Valley criticizes Google-Verizon accord” in The Washington Post; Matthew Lasar’s “Google-Verizon NN Pact Riddled with Loopholes” on Ars Technica and Marguerite Reardon’s “Net neutrality crusaders slam Verizon, Google” at CNET for a sampling of the vitriol).

But after going through the framework and comparing it more-or-less line for line with what the FCC proposed back in October, I found there were very few significant differences.  Surprisingly, much of the outrage being unleashed against the framework relates to provisions and features that are identical to the FCC’s Notice of Proposed Rulemaking (NPRM), which of course many of those yelling the loudest ardently support.


In a 3-2 vote, the Federal Communications Commission recently decided to jack up its official definition of “broadband” from 200 kbps download to the 4 Mbps download/1 Mbps upload used as a benchmark in Our Big Fat National Broadband Plan. The three commissioners in the majority also declared that the definition of broadband will continue to evolve as consumers purchase faster connections to utilize new applications.

Several months earlier, the FCC launched a proceeding to figure out how to convert universal service subsidies for rural telephone service into universal service subsidies for rural broadband service.  Put these two decisions together, and it looks like the majority on the FCC is hell-bent on establishing rural broadband subsidies as a perpetual entitlement program that will never “solve” the rural availability problem because the goalposts will keep moving.

The current USF program taxes price-sensitive services (long distance and wireless) to subsidize a service that is not very price sensitive (local phone connections).  If the FCC takes a further step on the funding side and starts collecting universal service assessments from broadband, it will diminish broadband subscribership by taxing a service that is even more price sensitive: broadband connections. (I explained this a few months ago here.)

It’s time to get off this merry-go-round. The solution was suggested by MIT economist Jerry Hausman back when the FCC first started creating the current universal service programs in response to the Telecom Act of 1996: use revenues from spectrum auctions. 

Instead of having the FCC perpetually collect assessments from broadband or telephone services to subsidize broadband buildout in rural areas, Congress should earmark revenues from the next spectrum auction for one-time buildout grants in high-cost areas. The grants should be awarded via a competitive procurement auction that would force subsidy-seekers in different locations to compete with each other for the federal dollars. And Congress should explicitly wind down the universal service telephone subsidies in high cost areas and prohibit the FCC from using universal service assessments to fund broadband deployment in these places.
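The procurement auction described above is simply a reverse auction: subsidy-seekers in each high-cost area bid the one-time grant they would accept to build out broadband, and the lowest ask wins. A minimal sketch of that mechanism, with entirely hypothetical providers and dollar figures:

```python
# Hypothetical sketch of the competitive procurement ("reverse") auction
# proposed above. Provider names and bid amounts are illustrative only,
# not drawn from any actual FCC proceeding.

def award_grants(bids_by_area):
    """bids_by_area maps an area name to a list of (provider, bid) tuples.
    Returns the winning (provider, bid) per area -- the lowest asking price."""
    return {
        area: min(bids, key=lambda provider_bid: provider_bid[1])
        for area, bids in bids_by_area.items()
    }

bids = {
    "Area A": [("Provider X", 4_000_000), ("Provider Y", 3_200_000)],
    "Area B": [("Provider X", 9_500_000), ("Provider Z", 8_750_000)],
}
winners = award_grants(bids)
# Area A goes to Provider Y at $3.2M; Area B goes to Provider Z at $8.75M.
```

The competitive pressure is the point: because only the lowest ask in each area is funded, bidders have an incentive to reveal the smallest subsidy they would actually accept.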

Using revenues from spectrum auctions would avoid the distortions and perverse consequences caused by ongoing universal service assessments on broadband or telephone services. One-shot deployment grants would ensure that the availability problem gets solved, so the federal government can declare victory and get out of the perpetual subsidy business.

Of course, some locations in the US are so expensive to serve that the potential revenues might not even cover the operating costs of broadband. But it does not follow that operators in these places need an ongoing stream of subsidies. When preparing their subsidy bids, they will have to calculate how large the one-shot payment needs to be to induce them to take on the capital costs and the ongoing operating costs. In other words, they can bank some of the one-shot subsidy and use it to cover the difference between revenues and operating costs.
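The bid-sizing logic in that last point is simple present-value arithmetic: a rational one-shot bid covers capital costs plus the discounted value of any ongoing gap between operating costs and revenues. A rough illustration, with made-up numbers:

```python
# Illustrative arithmetic for the point above: an operator's one-shot bid
# should cover buildout capex plus the present value of any ongoing
# shortfall of revenues below operating costs. All figures are invented.

def required_one_shot_subsidy(capex, annual_operating_cost,
                              annual_revenue, discount_rate, years):
    shortfall = annual_operating_cost - annual_revenue
    # Present value of an annuity of `shortfall` over `years`.
    pv_shortfall = sum(shortfall / (1 + discount_rate) ** t
                       for t in range(1, years + 1))
    return capex + max(pv_shortfall, 0.0)

bid = required_one_shot_subsidy(
    capex=2_000_000,            # one-time buildout cost
    annual_operating_cost=300_000,
    annual_revenue=250_000,     # revenues fall short of operating costs
    discount_rate=0.05,
    years=20,
)
# The portion of the bid above capex is what the operator "banks" to cover
# the operating shortfall in later years.
```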

This modest proposal does not address all aspects of the universal service fund. But it would achieve a clear objective — bringing broadband to rural areas — while allowing the FCC to extricate itself from the business of distributing $4.6 billion a year in subsidies. Let’s see a timetable for withdrawal!

This week on the podcast, Birgitta Jónsdóttir, Member of the Icelandic Parliament for the Movement party, and one of the chief sponsors of the Icelandic Modern Media Initiative, discusses the initiative.  She explains how it was crafted, who it would protect and how, and Wikileaks’ influence on it.  Jónsdóttir specifically discusses the proposal’s impact on journalists, sources, whistleblowers, libel tourism, superinjunctions, freedom of information, prior restraint, and government transparency.  She also talks about the inspiration behind the initiative, which stems partly from her background as a writer and activist, and her path to the Icelandic Parliament.


Do check out the interview, and consider subscribing to the show on iTunes. Past guests have included Clay Shirky on cognitive surplus, Nick Carr on what the Internet is doing to our brains, Gina Trapani and Anil Dash on crowdsourcing, James Grimmelmann on online harassment and the Google Books case, Michael Geist on ACTA, Tom Hazlett on spectrum reform, and Tyler Cowen on just about everything.

So what are you waiting for? Subscribe!

Reading the 2002 edited volume, From 0 to 1: An Authoritative History of Modern Computing, I came across an interesting history of the first software patent—a business history, as opposed to a legal history. I hadn’t seen this anywhere before, so I’ll recount it here.

Luanne Johnson, president (now co-chair) of the Software History Center, tells the story of Martin A. Goetz at Applied Data Research (ADR), a Princeton, New Jersey company founded in 1959 to sell computer programming services.

In 1964, computer manufacturer RCA approached ADR about writing a flowcharting program that RCA would provide to users of its RCA 501 computer at no cost. ADR designed and wrote the program, AUTOFLOW, and offered it to RCA for $25,000. But RCA didn’t want it at that price. Marty Goetz then went to work on a different approach to recouping the $10,000 his company had laid out to write AUTOFLOW.

There were only hundreds of companies using the RCA 501 to whom he might have sold directly. So, seeing a larger market among users of the IBM 1401, Goetz and his colleagues re-wrote AUTOFLOW for that computer. They ultimately produced flowcharting software superior to what IBM offered its customers: AUTOFLOW could flowchart the logical sequence of existing software, easing the design of software to complement what was already in use on IBM machines.

The nice folks at the New York Times “Room for Debate” feature asked me and a group of bright lights to discuss the Verizon-Google agreement on network neutrality regulation, as it stood at various points in the day.

Read the comments of Tim Wu, Lawrence Lessig, David Gelernter, Ed Felten, Jonathan Zittrain, and me. Much of my comment owes credit to Tim Lee’s excellent paper “The Durable Internet.”

We’re all over the place, folks . . .

Update: Late addition: Gigi Sohn.

Give up?

Both have adopted highly unconventional names in their lifetimes. In Prince’s case, it was the adoption of a symbol to protest Warner Brothers’ artistic and financial control of his output.

Following suit, H.R. 1586 has adopted the name “______Act of____,” apparently because of the haste with which the Senate wanted to pass the bill last week.

The Senate’s substitute amendment on this $26 billion spending bill had a placeholder bill name, and it could not take time to replace the placeholder. The House is expected to return this week and pass the Senate amendment, sending it to the president.

As reported on the WashingtonWatch.com blog and CNET News, this highly unconventional name may be what goes into law. With the Senate out of town until September, there is no chance to pass a correcting amendment in both houses. The Constitution requires both chambers to pass identical bills, so the House must take up the “______Act of____” and pass it as such.

If it does, the “law with no name” will stand as a lasting tribute to the inattention Congress gives its work. Spending billions of taxpayer dollars is a hurried and casual affair for our lawmakers.

There are few things I find more annoying in the Net neutrality wars than the silly assertion by groups like Free Press and other regulatory radicals that “Net neutrality is the Internet’s First Amendment.”  It’s utter rubbish, as I have documented here many times before.  But now Sen. Al Franken is sputtering the same nonsense, as he did in a recent CNN.com editorial claiming that “Net neutrality is foremost free speech issue of our time.”  The folks at CNN invited me to respond, and below is the piece PFF press director Mike Wendy and I submitted.

____________

Big Government the Real Threat to Internet

by Adam Thierer & Mike Wendy

In his recent CNN.com opinion piece, “Net neutrality is foremost free speech issue of our time,” Sen. Al Franken claims that “our free speech rights are under assault — not from the government but from corporations seeking to control the flow of information in America.”

He alludes to potential corporate blocking of online products and speech and says, “If that scares you as much as it scares me, then you need to care about net neutrality.”

Chicken Little, call your office!

Such sky-is-falling scare tactics are all too common in the heated debate over net neutrality regulation, but actual evidence of such nefarious corporate scheming is nowhere to be found. Perhaps that’s why Franken resorts to such tall tales.

Moreover, his reading of the First Amendment is at odds with the one most of us learned about in civics class (“Congress shall make no law…”). His would empower regulators by converting the First Amendment from a shield against government action into a sword that bureaucrats could wield against private industry.

I have a piece on Internet privacy in the Wall Street Journal today. It’s one side of a “debate” on Internet privacy and tracking. I say be careful what you give up if you thwart online tracking—personalization, free content, and other goodies may go by the wayside.

My “opponent” is Nicholas Carr, whose identity and arguments I didn’t know as I wrote, nor, presumably, did he know mine. His is a good piece that lays out the many legitimate concerns with online tracking. Must be nice to be the maximal-privacy “good guy”!

For the sake of making it interesting I’ll pick out one important point that highlights the nub of the issue.

Privacy tradeoffs have always been a part of life, Carr says, “But now, thanks to the Net, we’re losing our ability to understand and control those tradeoffs—to choose, consciously and with awareness of the consequences, what information about ourselves we disclose and what we don’t.”

This sentence brought back to me a memorable moment from law school. In a seminar course, the professor called upon a fellow student who rather dopily apologized, “Sorry, I didn’t have time to do the reading.”

“In fact you did have time to do the reading,” replied the teacher, “but you just didn’t take it. Isn’t that correct?”

It was funny, if embarrassing for my colleague, and a great illustration of precision with language.

Holding to that standard of precision, I’ll disagree with Carr’s statement: The Net is not affecting our ability to understand and control privacy tradeoffs. Its development has outstripped that capacity. Developing consumers’ understanding of information flows, information uses, and consequences will position them to restore privacy.

I don’t think Carr would disagree with that sentiment in the main. Later he says, agreeably to me, “We need to take personal responsibility for the information we share whenever we log on.”

And I do think that’s the heart of the problem: “Education is the hard way, and it is the only way, to get consumers’ privacy interests balanced with their other interests.”

While on vacation last week, I finished up a few new cyber-policy books and one of them was  Cyber War: The Next Threat to National Security and What to Do About It by Richard A. Clarke and Robert K. Knake.  The two men certainly possess the right qualifications for a review of the subject.  Clarke was National Coordinator for Security, Infrastructure Protection, and Counterterrorism during the Clinton years and also served in the Reagan and two Bush administrations. Knake is an international affairs fellow at the Council on Foreign Relations where he specializes in cybersecurity.

Clarke and Knake’s book is important if for no other reason than, as they note, “there are few books on cyber war.” (p. 261) Thus, their treatment of the issue will likely remain the most relevant text in the field for some time to come.

They define cyber war as “actions by a nation-state to penetrate another nation’s computers or networks for the purposes of causing damage or disruption” (p. 6) and argue that such actions are on the rise.  They also claim that the U.S. has the most to lose if and when a major cyber war breaks out, since we are now so utterly dependent upon digital technologies and networks.

At their best, Clarke and Knake walk the reader through the mechanics of cyber war, identify some of the key players and countries that could engage in it, and explain what the costs of such a war would entail.  At other times, however, the book suffers from a somewhat hysterical tone, as the authors are out not just to describe cyber war but also to issue a clarion call for regulatory action to combat it.  Ryan Singel of Wired, for example, has taken issue with the book’s “doomsday scenario that stretches credulity” and claims that “Like most cyberwar pundits, Clarke puts a shine on his fear mongering by regurgitating long-ago debunked hacker horror stories.”  Bruce Schneier and Jim Harper have raised similar concerns elsewhere.
