February 2010

Railroading Broadband?


FCC Chairman Julius Genachowski’s comparison of broadband with electricity in a speech this week has generated mixed reviews in the blogosphere. Manny Ju says that this shows Genachowski “gets it” — that he understands the transformational power of broadband and how it will come to be regarded as a ubiquitous necessity in the years ahead. Scott Cleland is more alarmed: “The open question here is electricity transmission is regulated as a public utility. Is the FCC Chairman’s new metaphor intended to extend to how broadband should be regulated?”

It may surprise some technophiles, but this kind of discussion even predates electricity. The advent of the railroads in the 19th century brought similar arguments.  Railroads were usually a heck of a lot cheaper way of hauling goods and people across land than the next best alternative at the time: wagons. Railroads were “The Next Big Thing” that no town could do without — especially if the town lacked access to navigable waters. Lawmakers handed out subsidies (often in the form of land grants), then regulated railroads to control perceived abuses, such as discriminatory pricing for different kinds of traffic or traffic between different locations. Henry Carter Adams, the godfather of economic regulation in the U.S., said all shippers deserved “equality before the railroads.” Even today, commentators lament the rural towns that people abandoned because they lacked rail access. Deja vu all over again! 

As long as we’re deja-vuing, let’s remember a few little problems America encountered down the railroad regulatory track:

1. Subsidies created “excess capacity” — that is, more capacity than customers were willing to pay for. In some cases, subsidies attracted shady operators into the railroad business whose main goal was to get land grants or sell diluted stock offerings to the public, not build and operate railroads. 

2. Regulation ended up cartelizing railroads and propping up rail rates, which faced downward pressure because of the excess capacity.

3. When another low-cost, convenient alternative (trucking) came along in the 1930s, truckers got pulled into the cartel when they too were placed under Interstate Commerce Commission regulation to keep them from undercutting rail rates.

4. Despite cartelization, by the late 1970s, 21 percent of the nation’s railroad track was operated by bankrupt railroads, even though the railroads had shed unprofitable passenger service to Amtrak earlier in the decade. Part of the reason was excessive costs: Because access to freight rail service was still considered a right, regulation prevented railroads from abandoning money-losing lines. Part of the reason was restraints on competition: The regulatory passion for “fair” pricing kept railroads from competing aggressively with each other or with truckers. When the Southern Railway introduced its 100-ton “Big John” grain hopper cars in the 1960s, for example, it couldn’t offer shippers lower rates in exchange for high volume until it appealed an Interstate Commerce Commission ruling all the way to the Supreme Court.

By the late 1970s, a Democratic president, a bipartisan majority in Congress, and economists across the political spectrum agreed that railroad regulation needed a radical overhaul. Regulatory reforms made it easier for railroads to abandon unprofitable service, in many cases turning track over to new, lower-cost short lines and regional railroads. Prices for more than 90 percent of rail traffic were effectively deregulated. At the same time, Congress deregulated rates and entry on interstate trucking routes. This encouraged rail-truck competition and also allowed each mode to specialize in serving those markets it could serve at lowest cost.

Rail rates fell, and railroads came out of bankruptcy. The current system is hardly perfect, but most economic research suggests that most consumers, shippers, and railroads are much better off now than they were under the old regulatory system.  (For reviews of scholarly research on this, check out Clifford Winston’s paper here  or my article here.)

Will we repeat the cycle with broadband? I don’t know, but to this railfan, the current broadband debate is looking soooo retro — as in 19th century!

Over at “Convergences,” I write on the origins of the idea of a “public option” for health insurance. In part, I note:

At a superficial level, the “public option” for health care is both appealing and puzzling. From a competition policy standpoint, the entry into the market of a subsidized competitor offering a wide array of benefits certainly might put downward pressure on prices as well as easing humanitarian concerns about access. Equally obvious, though, are objections. What mechanism of accountability would exist to ensure that this subsidized entity is well run? It cannot be allowed to go bankrupt; nor is it likely that unhappy customers would have much leeway in suing it. How would it avoid driving private insurers out of the market for low-end service entirely? How much of a subsidy would it get, and how is this to be funded?

Since the party and administration that sponsored this proposal are associated with the intelligentsia, however, people hoping to improve the health care system probably felt entitled to trust that these questions had good answers. Somewhere, someone deep in the bowels of the brain trust had considered these issues. Curious about this, I found myself reading one of the more serious works to address the public option, a paper by Randall D. Cebul, James B. Rebitzer, Lowell J. Taylor and Mark E. Votruba entitled, “Unhealthy Insurance Markets: Search Frictions and the Cost and Quality of Health Insurance,” identified as NBER Working Paper No. 14455, from October 2008.

Read my whole piece here.

At the FTC’s second Exploring Privacy roundtable at Berkeley in January, many of the complaints about online advertising centered on two things: how difficult it was to control the settings for Adobe’s Flash player, which is used to display ads, videos, and a wide variety of other graphic elements on most modern webpages; and the potential for unscrupulous data collectors to “re-spawn” standard (HTTP) cookies even after a user deleted them, simply by referencing the Flash cookie that domain had stored on the user’s computer—thus circumventing the user’s attempt to clear out their own cookies. Adobe responded to the first criticism by promising to include better privacy management features in Flash 10.1, and to the second by condemning such re-spawning and calling for “a mix of technology tools and regulatory efforts” to deal with the problem (including FTC enforcement). (Adobe’s filing offers a great history of Flash, a summary of its use, and an introduction to Flash cookies, which Adam Marcus detailed here.)
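To make the re-spawning mechanism concrete, here is a minimal sketch of how such a tracker could work. Everything named here is hypothetical: the getLSO/setLSO helpers stand in for ActionScript functions a page might expose from an embedded Flash movie via ExternalInterface, and “uid” is an illustrative cookie name, not anything from Adobe’s filing.

```typescript
// Hypothetical sketch of HTTP-cookie "re-spawning" from a Flash cookie (LSO).
// getLSO/setLSO model page-exposed ActionScript helpers; they are not a real Adobe API.

function readCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
  return match ? decodeURIComponent(match[1]) : null;
}

function writeCookie(name: string, value: string): void {
  document.cookie = `${name}=${encodeURIComponent(value)}; path=/; max-age=31536000`;
}

// Interface of the hypothetical Flash embed that proxies SharedObject access.
interface TrackerSwf {
  getLSO(key: string): string | null;
  setLSO(key: string, value: string): void;
}

function respawnTrackingId(swf: TrackerSwf): string {
  let id = readCookie("uid");
  if (!id) {
    // The HTTP cookie is gone (e.g., the user cleared it), so quietly restore
    // it from the Flash local shared object, defeating the user's deletion.
    id = swf.getLSO("uid") ?? Math.random().toString(36).slice(2);
    writeCookie("uid", id);
  }
  swf.setLSO("uid", id); // mirror the value back so either store can revive the other
  return id;
}
```

The key point is the mirroring: because the identifier lives in two stores with two different deletion interfaces, clearing only the browser’s cookies accomplishes nothing.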

Earlier this week (and less than three weeks later), Adobe rolled out Flash 10.1, which offers an ingenious solution to the problem of how to manage Flash cookies: Flash now simply integrates its privacy controls with Internet Explorer, Firefox, and Chrome (and will soon do so with Safari). So when the user turns on “private browsing mode” in these browsers, Flash cookies will be stored only temporarily, allowing users to use the full functionality of the site, but the Flash Player will “automatically clear any data it might store during a private browsing session, helping to keep your history private.” That’s a pretty big step, and an elegantly simple answer to the problem of how to empower users to take control of their own privacy. Moreover:

Flash Player separates the local storage used in normal browsing from the local storage used during private browsing. So when you enter private browsing mode, sites that you previously visited will not be able to see information they saved on your computer during normal browsing. For example, if you saved your login and password in a web application powered by Flash during normal browsing, the site won’t remember that information when you visit the site under private browsing, keeping your identity private.
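Conceptually, what Adobe describes amounts to partitioning local storage by browsing mode and discarding the private partition when the session ends. Here is a minimal sketch of that design idea (my own modeling of the behavior quoted above, not Adobe’s actual implementation):

```typescript
// Sketch of mode-partitioned local storage, modeling the behavior quoted above.

type BrowsingMode = "normal" | "private";

class PartitionedStorage {
  // One store per mode: private-mode reads and writes never touch normal data.
  private stores: Record<BrowsingMode, Map<string, string>> = {
    normal: new Map(),
    private: new Map(),
  };

  get(mode: BrowsingMode, key: string): string | undefined {
    // A site visited in private mode cannot see what it saved in normal mode.
    return this.stores[mode].get(key);
  }

  set(mode: BrowsingMode, key: string, value: string): void {
    this.stores[mode].set(key, value);
  }

  endPrivateSession(): void {
    // Everything written during private browsing is discarded automatically.
    this.stores.private = new Map();
  }
}
```

The isolation runs both ways, which is exactly what the saved-login example above illustrates.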

Continue reading →

Many groups clearly contend there is a “crisis” in journalism, some going so far as to advocate government support of news organizations, despite the dangers inherent in government-funded ideas and their impact on critique and dissent.

Georgetown is hosting a conference today called “The Crisis in Journalism: What Should the Government Do?” (at which Adam Thierer is speaking), with the defining question: “How can government entities, particularly the Federal Trade Commission and the Federal Communications Commission, help to form a sustainable 21st century model for journalism in the United States?”

We actually resolved the question of “What Government Should Do,” Continue reading →

Yesterday fellow TLFers Jim Harper and Berin Szoka joined me for an episode of the Surprisingly Free Conversations podcast in which we discussed the buzz around Google Buzz. You can listen to it here. You might also want to check out our other recent episodes.

Check them all out and subscribe at the podcast page.

This morning I spoke at a Georgetown Center for Business and Public Policy event on “The Crisis in Journalism: What Should the Government Do?” The panel also included Steven Waldman, senior advisor to FCC Chairman Julius Genachowski, who is heading up the FCC’s new effort on “The Future of Media and the Information Needs of Communities in a Digital Age”; Susan DeSanti, Director of Policy Planning at the Federal Trade Commission (the FTC has also been investigating whether journalism will survive the Internet age and what government should do about it); and Andy Schwartzman, President of the Media Access Project. Mark MacCarthy of Georgetown Univ. moderated the discussion. Here’s the outline of my remarks; I didn’t bother penning a speech. [Update: Video is now online, but it’s not embeddable and the sound is bad.]

____________

What Funds Media? Can Government Subsidies Fill the Void?

1) Public media & subsidies can play a role, but that role should be tightly limited

  • Should be focused on filling niches
  • Bottom-up (community-based) efforts are probably better than top-down proposals, which will probably end up resembling Soviet-style 5-year plans
  • Regardless, public subsidies should not be viewed as a replacement for traditional private media sources
  • And I certainly hope we are not talking about a full-blown “public option” for the press along the lines of what Free Press, the leading advocate of some sort of government bailout for media, wants.

2) Indeed, public financing would not begin to make up the shortfall from traditional private funding sources

Continue reading →

Ryan Radia brought to my attention this excellent Slate piece by Vaughan Bell entitled “Don’t Touch That Dial! A History of Media Technology Scares, from the Printing Press to Facebook.” It touches on many of the themes I’ve discussed here in my essays on techno-panics, fears about information overload, and the broader optimists vs. pessimists battle throughout history regarding the impact of new technologies on culture, life, and learning. “These concerns stretch back to the birth of literacy itself,” Bell rightly notes:

Worries about information overload are as old as information itself, with each generation reimagining the dangerous impacts of technology on mind and brain. From a historical perspective, what strikes home is not the evolution of these social concerns, but their similarity from one century to the next, to the point where they arrive anew with little having changed except the label.

Quite right. And Bell’s essay reminds us of this gem from the great Douglas Adams about how bad we humans are at putting technological change in perspective:

Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.

So true, and I wish I had remembered it before I wrapped up my discussion of “adventure windows” in my review of Jaron Lanier’s new book, You Are Not a Gadget, which I published last night. As I noted in that essay:

Our willingness to try new things and experiment with new forms of culture—our “adventure window”—fades rapidly after certain key points in life, as we gradually get set in our ways. Many cultural critics and average folk alike always seem to think the best days are behind us and the current good-for-nothing generation and their new-fangled gadgets and culture are garbage.

Continue reading →

Just a reminder about tomorrow’s Georgetown Center for Business and Public Policy event on “The Crisis in Journalism: What Should the Government Do?” It will be held at 9:30am tomorrow at the Newseum (Knight Conference Center), located at 555 Pennsylvania Ave. here in Washington, DC. Breakfast will be served. (Please RSVP by emailing cbpp@msb.edu.) Here’s the event description:

This roundtable discussion will bring together academics, government officials and industry leaders to consider the future of the journalism industry. Specifically, what does a future economic model for the journalism industry look like? What is the role of new media in modern journalism? How can newspapers integrate web-based news into their business models? How can government entities, particularly the Federal Trade Commission and the Federal Communications Commission, help to form a sustainable 21st century model for journalism in the United States?

Mark MacCarthy of Georgetown Univ. will moderate the panel, which includes: Continue reading →

Of the many tech policy-related books I’ve read in recent years, I can’t recall ever being quite so torn over one as I have been about Jaron Lanier’s You Are Not a Gadget: A Manifesto. There were moments while reading it when I thought, “Yes, quite right!,” and other times when I muttered to myself, “Oh God, no!”

The book is bound to evoke such strong emotions since Lanier doesn’t mince words about what he believes is the increasingly negative impact of the Internet and digital technologies on our lives, culture, and economy. In this sense, Lanier fits squarely in the pessimist camp on the Internet optimists vs. pessimists spectrum. (I outlined the intellectual battle lines between these two camps in my essay, “Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society.”) But Lanier is no techno-troglodyte. Generally speaking, his pessimism isn’t as hysterical in tone or Luddite-ish in its prescriptions as the tracts of some other pessimists. And because Lanier is a respected Internet visionary, a gifted computer scientist, an expert on virtual reality, and a master wordsmith, the concerns he articulates here deserve to be taken seriously—even if one ultimately does not share his lugubrious worldview.

On the very first page of the book, Lanier hits on three interrelated concerns that other Net pessimists have articulated in the past:

  1. Loss of individuality & concerns about “mob” behavior (Lanier: “these words will mostly be read by nonpersons–automatons or numb mobs composed of people who are no longer acting as individuals.”)
  2. Dangers of anonymity (Lanier: “Reactions will repeatedly degenerate into mindless chains of anonymous insults and inarticulate controversies.”)
  3. “Sharecropper” concern that a small handful of capitalists are getting rich off the backs of free labor (Lanier: “Ultimately these words will contribute to the fortunes of those few who have been able to position themselves as lords of the computing clouds.”)

Again, others have trod this ground before, and it’s strange that Lanier doesn’t bother mentioning any of them. Neil Postman, Mark Helprin, Andrew Keen, and Lee Siegel have all railed against the online “mob mentality” and argued that it can be at least partially traced to anonymous online communications and interactions. And it is Nick Carr, author of The Big Switch, who has been the most eloquent in articulating the “sharecropper” concern, which Lanier now extends with his “lords of the computing clouds” notion. [More on that towards the end.] Continue reading →

See my new commentary at CircleID — “How to Manage Internet Abundance”:

The Internet has two billion global users, and the developing world is just hitting its growth phase. Mobile data traffic is doubling every year, and soon all four billion mobile phones will access the Net. In 2008, according to a new UC-San Diego study, Americans consumed over 3,600 exabytes of information, or an average of 34 gigabytes per person per day. Microsoft researchers argue in a new book, “The Fourth Paradigm,” that an “exaflood” of real-world and experimental data is changing the very nature of science itself. We need completely new strategies, they write, to “capture, curate, and analyze” these unimaginably large waves of information.

As the Internet expands, deepens, and thrives—growing in complexity and importance—managing this dynamic arena becomes an ever bigger challenge. Iran severs access to Twitter and Gmail. China dramatically restricts individual access to new domain names. The U.S. considers new Net Neutrality regulation. Global bureaucrats seek new power to allocate the Internet address space. All the while, dangerous “botnets” roam the Web’s wild west. Before we grab, restrict, and possibly fragment a unified Web, however, we should stop and think. About the Internet’s pace of growth. About our mostly successful existing model. And about the security and stability of this supreme global resource.
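As a quick sanity check of the consumption figures quoted above, the per-person and aggregate numbers are mutually consistent. (The 2008 U.S. population estimate below is my own input, not a figure from the study.)

```typescript
// Cross-check: 34 GB per person per day, across the U.S., for one year.
const population = 304_000_000;   // approximate 2008 U.S. population (assumption)
const gbPerPersonPerDay = 34;     // per the UC-San Diego study, as quoted
const gbPerYear = population * gbPerPersonPerDay * 365;
const exabytes = gbPerYear / 1e9; // 1 exabyte = 1e9 gigabytes
console.log(exabytes.toFixed(0)); // ~3773, in the same ballpark as the ~3,600 EB figure
```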

Continue reading →