[I am currently helping Berin Szoka edit a collection of essays from various Internet policy scholars for a new PFF book called “The Next Digital Decade: Essays about the Internet’s Future.”  I plan on including two chapters of my own in the book responding to the two distinct flavors of Internet pessimism that I increasingly find are dominating discussions about Internet policy. Below you will see how the first of these two chapters begins. I welcome input as I refine this draft. ]

Surveying the prevailing mood surrounding cyberlaw and Internet policy circa 2010, one is struck by the overwhelming sense of pessimism about our long-term prospects for a better future.   “Internet pessimism,” however, comes in two very distinct flavors:

  1. Net Skeptics, Pessimistic about the Internet Improving the Lot of Mankind: The first variant of Internet pessimism is rooted in general skepticism regarding the supposed benefits of cyberspace, digital technologies, and information abundance. The proponents of this pessimistic view often wax nostalgic about some supposed “good ol’ days” when life was much better (although they can’t seem to agree on when those were). At a minimum, they want us to slow down and think twice about life in the Information Age and how it is personally affecting each of us. Other times, however, their pessimism borders on neo-Luddism, with proponents recommending that steps be taken to curtail what they feel is the destructive impact of the Net or digital technologies on culture or the economy. Leading proponents of this variant of Internet pessimism include Neil Postman (Technopoly: The Surrender of Culture to Technology), Andrew Keen (The Cult of the Amateur: How Today’s Internet is Killing Our Culture), Lee Siegel (Against the Machine: Being Human in the Age of the Electronic Mob), Mark Helprin (Digital Barbarism) and, to a lesser degree, Jaron Lanier (You Are Not a Gadget) and Nicholas Carr (The Big Switch and The Shallows).
  2. Net Lovers, Pessimistic about the Future of Openness: A different type of Internet pessimism is on display in the work of many leading cyberlaw scholars today. Noted academics such as Lawrence Lessig (Code and Other Laws of Cyberspace), Jonathan Zittrain (The Future of the Internet & How to Stop It), and Tim Wu (The Master Switch: The Rise and Fall of Information Empires) embrace the Internet and digital technologies, but argue that they are “dying” due to a lack of sufficient care or collective oversight. In particular, they fear that the “open” Internet and “generative” digital systems are giving way to closed, proprietary systems, typically run by villainous corporations out to erect walled gardens and quash our digital liberties. Thus, they are pessimistic about the long-term survival of the wondrous Internet that we currently know and love.

Despite their different concerns, two things unite these two schools of techno-pessimism.  Continue reading →

I’ve noted here before that Gordon Crovitz is my favorite technology policy columnist and that everything he pens for his “Information Age” column for The Wall Street Journal is well worth reading.   His latest might be his best ever.  It touches upon the great debate between Internet optimists and pessimists regarding the impact of digital technology on our culture and economy.  His title is just perfect: “Is Technology Good or Bad? Yes.”  His point is that you can find evidence that technological change has both beneficial and detrimental impacts, and plenty of people on both sides of the debate to cite it for you.

He specifically references the leading pessimist, Nicholas Carr, and optimist, Clay Shirky, of our time. In The Shallows: What the Internet is Doing to Our Brains and The Big Switch: Rewiring the World, From Edison to Google, Carr paints a dismal portrait of what the Internet is doing to us and the world around us. Clay Shirky responds in books like Here Comes Everybody and Cognitive Surplus: Creativity and Generosity in a Connected Age, arguing that we are much better off because of the rise of the Net and digital technology.

This is a subject I’ve spent a lot of time noodling over here through the years and, most recently, I compiled all my random thoughts into a mega-post asking, “Are You an Internet Optimist or Pessimist?”  That post tracks all the leading texts on both sides of this debate.  I was tickled, therefore, when Gordon contacted me and asked for comment for his story after seeing my piece. [See, people really do still read blogs!]  Continue reading →

The Progress and Freedom Foundation has just published a white paper I wrote for them titled “The Seven Deadly Sins of Title II Reclassification (NOI Remix).”  This is an expanded and revised version of an earlier blog post that looks deeply into the FCC’s pending Notice of Inquiry regarding broadband Internet access. You can download a PDF here.

I point out that beyond the danger of subjecting broadband Internet to extensive new regulations under the so-called “Third Way” approach outlined by FCC Chairman Julius Genachowski, a number of other troubling features in the Notice indicate an even broader agenda for the agency with regard to the Internet. Continue reading →

The release of a joint policy framework from Google and Verizon this week touched off even more activity in the never-ending saga of Net Neutrality than the previous week’s rumors that such an agreement was in the works.

Op-ed pages, business and technology news programs, and public radio’s precious moments were overrun with anxious talking heads denouncing or praising the latest developments, or even a few of us trying just to explain what was and was not actually being said and done.

That’s not how August is supposed to be in policyland, when Washington reverts to the swamp from which it came.  (John Adams left early one summer during his Presidency and refused to return long after the heat had broken.)  I had hoped at long last to get around to finalizing last year’s tax return or maybe fixing my perennially-broken irrigation system, but oh well. Continue reading →

I’ve just published a long analysis for CNET of the proposed legislative framework presented yesterday by Google and Verizon.

The proposal has generated howls of anguish from the usual suspects (see quotes appearing in Cecilia Kang, “Silicon Valley criticizes Google-Verizon accord” in The Washington Post; Matthew Lasar’s “Google-Verizon NN Pact Riddled with Loopholes” on Ars Technica and Marguerite Reardon’s “Net neutrality crusaders slam Verizon, Google” at CNET for a sampling of the vitriol).

But after going through the framework and comparing it more-or-less line for line with what the FCC proposed back in October, I found there were very few significant differences.  Surprisingly, much of the outrage being unleashed against the framework relates to provisions and features that are identical to the FCC’s Notice of Proposed Rulemaking (NPRM), which of course many of those yelling the loudest ardently support.

Continue reading →

In a 3-2 vote, the Federal Communications Commission recently decided to jack up its official definition of “broadband” from 200 kbps download to the 4 Mbps download/1 Mbps upload used as a benchmark in Our Big Fat National Broadband Plan. The three commissioners in the majority also declared that the definition of broadband will continue to evolve as consumers purchase faster connections to utilize new applications.

Several months earlier, the FCC launched a proceeding to figure out how to convert universal service subsidies for rural telephone service into universal service subsidies for rural broadband service.  Put these two decisions together, and it looks like the majority on the FCC is hell-bent on establishing rural broadband subsidies as a perpetual entitlement program that will never “solve” the rural availability problem because the goalposts will keep moving.

The current USF program taxes price-sensitive services (long distance and wireless) to subsidize a service that is not very price sensitive (local phone connections).  If the FCC takes a further step on the funding side and starts collecting universal service assessments from broadband, it will diminish broadband subscribership by taxing a service that is even more price sensitive: broadband connections. (I explained this a few months ago here.)

It’s time to get off this merry-go-round. The solution was suggested by MIT economist Jerry Hausman back when the FCC first started creating the current universal service programs in response to the Telecom Act of 1996: use revenues from spectrum auctions. 

Instead of having the FCC perpetually collect assessments from broadband or telephone services to subsidize broadband buildout in rural areas, Congress should earmark revenues from the next spectrum auction for one-time buildout grants in high-cost areas. The grants should be awarded via a competitive procurement auction that would force subsidy-seekers in different locations to compete with each other for the federal dollars. And Congress should explicitly wind down the universal service telephone subsidies in high cost areas and prohibit the FCC from using universal service assessments to fund broadband deployment in these places.

Using revenues from spectrum auctions would avoid the distortions and perverse consequences caused by ongoing universal service assessments on broadband or telephone services. One-shot deployment grants would ensure that the availability problem gets solved, so the federal government can declare victory and get out of the perpetual subsidy business.

Of course, some locations in the US are so expensive to serve that the potential revenues might not even cover the operating costs of broadband. But it does not follow that operators in these places need an ongoing stream of subsidies. When preparing their subsidy bids, they will have to calculate how large the one-shot payment needs to be to induce them to take on the capital costs and the ongoing operating costs. In other words, they can bank some of the one-shot subsidy and use it to cover the difference between revenues and operating costs.

This modest proposal does not address all aspects of the universal service fund. But it would achieve a clear objective — bringing broadband to rural areas — while allowing the FCC to extricate itself from the business of distributing $4.6 billion a year in subsidies. Let’s see a timetable for withdrawal!

There are few things I find more annoying in the Net neutrality wars than the silly assertion by groups like Free Press and other regulatory radicals that “Net neutrality is the Internet’s First Amendment.”  It’s utter rubbish, as I have documented here many times before.  But now Sen. Al Franken is running around sputtering such nonsense, as he did in this recent CNN.com editorial, claiming that “Net neutrality is foremost free speech issue of our time.”   The folks at CNN invited me to respond, and below you will find the piece PFF press director Mike Wendy and I submitted.


Big Government the Real Threat to Internet

by Adam Thierer & Mike Wendy

In his recent CNN.com opinion piece, “Net neutrality is foremost free speech issue of our time,” Sen. Al Franken claims that “our free speech rights are under assault — not from the government but from corporations seeking to control the flow of information in America.”

He alludes to potential corporate blocking of online products and speech and says, “If that scares you as much as it scares me, then you need to care about net neutrality.”

Chicken Little, call your office!

Such sky-is-falling scare tactics are all too common in the heated debate over net neutrality regulation, but actual evidence of such nefarious corporate scheming is nowhere to be found. Perhaps that’s why Franken resorts to such tall tales.

Moreover, his reading of the First Amendment is at odds with the one most of us learned about in civics class (“Congress shall make no law…”). His would empower regulators by converting the First Amendment from a shield against government action into a sword that bureaucrats could wield against private industry. Continue reading →

While on vacation last week, I finished up a few new cyber-policy books and one of them was  Cyber War: The Next Threat to National Security and What to Do About It by Richard A. Clarke and Robert K. Knake.  The two men certainly possess the right qualifications for a review of the subject.  Clarke was National Coordinator for Security, Infrastructure Protection, and Counterterrorism during the Clinton years and also served in the Reagan and two Bush administrations. Knake is an international affairs fellow at the Council on Foreign Relations where he specializes in cybersecurity.

Clarke and Knake’s book is important if for no other reason than, as they note, “there are few books on cyber war.” (p. 261) Thus, their treatment of the issue will likely remain the most relevant text in the field for some time to come.

They define cyber war as “actions by a nation-state to penetrate another nation’s computers or networks for the purposes of causing damage or disruption” (p. 6) and they argue that such actions are on the rise.  They also claim that the U.S. has the most to lose if and when a major cyber war breaks out, since we are now so utterly dependent upon digital technologies and networks.

At their best, Clarke and Knake walk the reader through the mechanics of cyber war, identify some of the key players and countries that could engage in it, and explain what the costs of such a war would entail.  At other times, however, the book suffers from a somewhat hysterical tone, as the authors aim not just to describe cyber war, but also to issue a clarion call for regulatory action to combat it.  Ryan Singel of Wired, for example, has taken issue with the book’s “doomsday scenario that stretches credulity” and claims that, “Like most cyberwar pundits, Clarke puts a shine on his fear mongering by regurgitating long-ago debunked hacker horror stories.”  Bruce Schneier and Jim Harper have raised similar concerns elsewhere.

Continue reading →

The White House and the Federal Communications Commission have painted themselves into a very tight and very dangerous corner on Net Neutrality.  To date, a bipartisan majority of Congress, labor leaders, consumer groups and, increasingly, some of the initial advocates of open Internet rules are all shouting that the agency has gone off the rails in its increasingly Ahab-like pursuit of an obscure and academic policy objective.

Now comes further evidence, none of it surprising, that all this effort has been a fool’s errand from the start.  Jacqui Cheng of Ars Technica is reporting today on a new study from Australia’s University of Ballarat that suggests only 0.3% of file sharing using the BitTorrent protocol is something other than the unauthorized distribution of copyrighted works.  Which is to say that 99.7% of the traffic the researchers sampled is illegal.  The Australian study, as Cheng notes, supports the similar conclusions of a Princeton University study published earlier this year.

Continue reading →

If I ever had any hope of “keeping up” with developments in the regulation of information technology—or even the nine specific areas I explored in The Laws of Disruption—that hope was lost long ago.  The last few months I haven’t even been able to keep up just sorting the piles of printouts of stories I’ve “clipped” from just a few key sources, including The New York Times, The Wall Street Journal, CNET News.com and The Washington Post.

 

I’ve just gone through a big pile of clippings that cover April-July.  A few highlights:  In May, YouTube surpassed 2 billion daily hits.  Today, Facebook announced it has more than 500,000,000 members.   Researchers last week demonstrated technology that draws device power from radio waves.

Continue reading →