December 2009

In a letter to the editor of the Washington Post last week, former FTC Commissioner Thomas Leary responded to a Post article describing the FTC’s suit against Intel as a “major step for President Obama,” consistent with his campaign promise to “reinvigorate antitrust enforcement.” Leary indignantly rejected this characterization, declaring:

People seem to forget that the FTC is a bipartisan independent agency.

As a Republican FTC commissioner appointed by a Democratic president, I can vouch for the agency’s independence. During my service from 1999 to 2005 in the administrations of presidents Bill Clinton and George Bush, I never received a direct or indirect policy recommendation on a pending matter from anyone in the White House or from any of the people in Congress who had actively supported me.

Leary’s leeriness about political encroachment on the FTC illustrates the depth of abiding faith in independent regulatory agencies as standing truly apart from the day-to-day politics of Washington—especially when the might of the regulatory state is being wielded against a particular company in quasi-judicial prosecutions, such as antitrust enforcement actions. But if the independence of the FTC is this important, what about the independence of the Federal Communications Commission, with its broad jurisdiction over the media and tools of free speech?

Leary would probably be appalled at the politicization of the FCC in recent years. Bush’s second FCC chairman, Kevin Martin, was infamous for his political Machiavellianism and widely despised by the communications law bar. By contrast, when it became clear shortly before Obama’s inauguration that his high-tech advisor Julius Genachowski would succeed Martin as FCC chairman, Genachowski received a chorus of applause from a wide range of philosophical perspectives, including from our former president at PFF, Ken Ferree:

Julius Genachowski is an outstanding choice to chair the Commission.  He is knowledgeable, experienced, and presumably will have the ear of the most influential people within the Administration.

While no one would compare the eminently likable Genachowski to Martin, his relationship with the Obama administration appears unprecedented in its closeness, and one must ask whether that is a good thing for the head of a supposedly “independent” regulatory agency or for the integrity of that agency’s decision-making.

So, did the decade just end or do we have another year to go? Honestly, I’ve never understood when the cut-off is from one decade to the next. (My friend Larry Magid struggles with the same question in his recent column on “The Decade in Technology.”) Nonetheless, I’ve seen a lot of best-of-decade lists published recently, so I thought I would throw my own out there even though it is still a work in progress.

I have been attempting to compile the definitive bibliography for our digital decade—the definitive list of Internet policy books, that is. I started throwing this together two years ago when I was penning my list of “The Most Important Internet Policy Books of 2008” and continued to work on it as I was finishing up my 2009 installment as well. I grabbed every book off my shelf that dealt with the future of the Internet and the impact the Digital Revolution is having on our lives, culture, and economy and threw the title and a link onto this list. (I’m also using the list to help structure my thoughts for a forthcoming book of my own on Internet Optimists vs. Pessimists, something I’ve been writing a lot about here in recent years.)

Below you will find what I’ve got so far. There are around 90 books on the list. I’ve divided the list by year, but you may be wondering what determined the order in which the books appear. In essence, I’ve listed what I feel are the one or two most important titles first and then added the others in no particular order. Eventually, I plan to post a “Most Important Internet Policy Books of the Decade” list outlining which titles I believe have been the most influential. I suspect I’ll name Benkler’s Wealth of Networks to the top slot, followed closely by Zittrain’s The Future of the Internet, Lessig’s Free Culture, and Chris Anderson’s The Long Tail. Anyway, that’s for another day.

For now, I would just like to ask for reader suggestions regarding what other titles should appear on this list. I will add titles as they come in. I want to stress, however, that I am trying to keep this list limited to books that have something to say about Internet policy (cyber-law, digital economics, information technology politics, etc.).

I hope others find this useful. And yes, I have read most of the books on this list! As I’ve noted here before, I’m a bit of a book nerd. (Now that I’ve received so many helpful additions to the list, there are some titles I have not yet had a chance to read through.)

My friend Larry Magid, a technology columnist for CBS and others, has a wonderful new column out about “The Decade in Technology.”  You have to read it to appreciate just how far we have come in such a short time. Larry notes:

[T]he past 10 years were a momentous period for technology.  Not only was there no iPhone a decade ago, there was hardly anything that could be considered a smartphone. The BlackBerry was introduced in 1999, when the well-heeled techno-savvy were carrying around flip phones. That year, 1999, was the height of the dot-com boom. But when you look back at it, the online world was nothing like it is today. There was no Facebook (founded in 2004) or Twitter (2007). Even MySpace wasn’t founded until 2003. The term Web 2.0 hadn’t been coined and most people who were online used the Web mostly to consume information. Those with the skills and resources to post to the Web were called “Webmasters.” Today, everyone with a Facebook account is a master of his or her own Web.

I tried to document the incredible technological changes in my own life over the past decade in this essay I penned on Super Bowl Sunday last February: “10 Years Ago Today… (Thinking About Technological Progress).”

Larry also notes that giants came and went as technology continued to evolve in unexpected ways:

Ten years ago AOL was the most popular Internet service provider and was so successful that it was able to purchase media giant Time Warner in January 2000 for $182 billion in stock. But the marriage didn’t make it through the decade. The two companies formally split up this month, with AOL, once again, being traded on the New York Stock Exchange as a separate company. AOL thrived in the ’90s because people were using the service to go online via phone. Today most American homes have broadband.

That’s something I wrote about at length in my recent paper on “A Brief History of Media Merger Hysteria.” Anyway, read Larry’s entire piece. It really drives home how lucky we are to be living in the midst of such a technological renaissance and information cornucopia.

I was reminiscing last night with my Cato Institute colleague Dan Mitchell about a favorite TLF post of mine: the Persuade-o-Meter. Woo! I slay me!

Dan is very excited about the blue curtain that Santa Claus brought him for Christmas. It matches the ties of his two favorite recent presidents. And he made this video to show it off.

As early as 1990, telecom industry observers speculated about the shift away from traditional circuit-switched telephony to “Voice Over IP” (VoIP). By the late 1990s, Internet industry observers began using the term “Everything Over IP” (EoIP) to describe the ongoing and seemingly inevitable shift toward Internet distribution of not just voice but all forms of audio, text, and multimedia content. Today, the term has become a victim of its own success: “Of course ‘everything’ is delivered over IP. How else would you do it?”

While this capitalist success story is among the greatest technological triumphs of our time, a similar rhetorical pattern is, unfortunately, playing out in a very different arena: regulatory creep. The crusade for “net neutrality” is metastasizing before our very eyes into a broader holy war to regulate “everything over IP” in the name of “protecting neutrality.” The next target is Google, as an op-ed in today’s New York Times makes crystal clear. Adam Thierer and I warned about this escalation of efforts to get government more involved in regulating the Internet back in October in a PFF paper entitled Net Neutrality, Slippery Slopes & High-Tech Mutually Assured Destruction:

If Internet regulation follows the same course as other industries, the FCC and/or lawmakers will eventually indulge calls by all sides to bring more providers and technologies “into the regulatory fold.” Clearly, this process has already begun. Even before rules are on the books, the companies that have made America the leader in the Digital Revolution are turning on each other in a dangerous game of brinksmanship, escalating demands for regulation and playing right into the hands of those who want to bring the entire high-tech sector under the thumb of government—under an Orwellian conception of “Internet Freedom” that makes corporations the real Big Brother, and government, our savior.

Today’s editorial is only a small dose of what’s to come. The floodgates will really open, letting forth a great gushing rage of demands for sweeping regulation of the entire Internet under the banner of neutrality, when the deadlines pass in the FCC’s “net neutrality” NPRM (comments due January 14, 2010; reply comments due March 5).

“Search Neutrality”

December 28, 2009

Google is wrong to seek public utility regulation of ISPs, but it is just as wrong for others to seek public utility regulation of Google.

The founder of a would-be Google competitor or spurned search engine optimizer (I can’t tell which and won’t credit his site with a link) takes to the pages of the New York Times to argue for “search neutrality.”

Though it would be a fitting bit of ironic comeuppance for Google, “search neutrality” regulation would ossify an innovative business and deprive consumers of the benefits of competition.

Happily, responses seem to be clustering around derision for the idea and criticism of the Times for publishing it.

Thanks to Jim for providing a great analysis of Jonathan Rosenberg’s “The Meaning of Open” from Google’s Policy Blog. I wanted to throw in my two cents without derailing the comments on Jim’s post. I hope you’ll find this new thread of discussion interesting.

While I enjoyed reading Rosenberg’s post and found myself nodding in agreement with many if not most of his points, it would have been nice if Rosenberg had been a little less cheeky about the closed/open symbiosis that is the real defining quality of Google. Instead, he dismisses the closed nature of Google’s search/ad business with these lines:

The search and advertising markets are already highly competitive with very low switching costs, so users and advertisers already have plenty of choice and are not locked in. Not to mention the fact that opening up these systems would allow people to “game” our algorithms to manipulate search and ads quality rankings, reducing our quality for everyone.

Both of these arguments have some merit as explanations for why Google’s search/ad business isn’t open-source or an “open system,” but neither serves as a reason to grant Google an exemption from Rosenberg’s “open systems win” credo.

Instead of prescribing that the rest of the world adopt total openness, Rosenberg could have taken a more nuanced position, leaving room for the kind of proprietary money-makers Google relies on and that we’re not likely to see disappear from the software world anytime soon, if ever. This sort of model, one which harnesses the profit-making potential of closed systems while funding satellite projects that take advantage of the iterative, peer-reviewed process of open-source development, is fascinating and makes for a much more interesting conversation than Rosenberg’s simplistic open-only philosophy.

Still, I think Google needs some defending, and its business model and philosophy deserve to be looked at for what they really are, not what they are presented to be.


It may be possible to wring consistency from the “open” manifesto Google SVP of Product Management Jonathan Rosenberg published earlier this week, but I can’t.

He correctly extols the virtues of openness in technology and data for its pro-competitive effects. Closed systems may be profitable in the short run, but they are weak innovation engines:

[A] well-managed closed system can deliver plenty of profits. They can also deliver well-designed products in the short run — the iPod and iPhone being the obvious examples — but eventually innovation in a closed system tends towards being incremental at best (is a four blade razor really that much better than a three blade one?) because the whole point is to preserve the status quo. Complacency is the hallmark of any closed system. If you don’t have to work that hard to keep your customers, you won’t.

But his paean to openness draws a tight line around Google’s profitable products.

Gee, if only the technology sector weren’t so gosh-darn static and slow-to-change, maybe we wouldn’t need government to keep tinkering with the market to make sure big, bad incumbents didn’t reign on high, oppressing us with their monopolistic control of our cyber-lives. But since the Big just keep getting bigger and “network effects” make it impossible for new competitors to get in the game, it’s a good thing we have so many federal agencies looking out for us poor consumers (FCC, FTC, DOJ, NTIA, etc.) with antitrust interventions, common carriage mandates and 1,000 other regulatory “tweaks”—not to mention all those oh-so-tech-savvy state legislators and attorneys general, always eager to leap into action! “Fire, ready, aim, boys!”

I mean after all, it’s only a matter of time before Time Warner/AOL uses their combined $100 billion might as “gatekeepers” to digitally enslave us all, right?  Oh, wait…

Uh, yeah, well… never mind. As Adam and I have noted before.

With weather-related travel trauma so prominent on my Twitterscope, and with news that the federal government is banning flight delays, I stopped short when I read this technology pitch:

One of the biggest hassles of travel has to be keeping track of those pesky hotel key cards and then trying to remember which way to fit the darned things in the wide variety of door locks. But that may soon change.

New technology’s been introduced and will soon be test marketed in Las Vegas hotels that allows guests to use their cell phones — any cell phone model at all — to unlock their hotel room door.

I’m not persuaded at all. The difficulty of managing hotel keys doesn’t even rate on my list of travel hassles.

The solution offered up is:

a simple system in which a computer generates a unique series of tones (that sounds kind of like those digitized cell phone ringtones used early this decade) that is then sent to the mobile device. When the tone is played outside the designated guestroom, a microphone incorporated in the locking system IDs the tone and unlocks the door.
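To make the quoted description a bit more concrete, here is a minimal sketch of how such a tone-based credential might work. I am assuming, purely for illustration and not as a description of the vendor's actual design, that the "unique series of tones" encodes a one-time code derived from a secret shared between the hotel's server and the door lock; every name and parameter below is hypothetical.

```python
import hashlib
import hmac

# Hypothetical sketch: the "unique series of tones" is modeled as a short
# sequence of frequencies derived from an HMAC over the booking identifier,
# keyed with a per-lock secret. The lock recomputes the same sequence and
# compares it against what its microphone "heard."
TONE_FREQS = [697, 770, 852, 941, 1209, 1336, 1477, 1633]  # DTMF-like palette

def code_for_stay(room_secret: bytes, booking_id: str, num_tones: int = 8) -> list[int]:
    """Derive the tone-frequency sequence for one guest's stay."""
    digest = hmac.new(room_secret, booking_id.encode(), hashlib.sha256).digest()
    return [TONE_FREQS[b % len(TONE_FREQS)] for b in digest[:num_tones]]

def lock_accepts(room_secret: bytes, booking_id: str, heard: list[int]) -> bool:
    """The lock recomputes the expected sequence and compares in constant time."""
    expected = code_for_stay(room_secret, booking_id)
    return hmac.compare_digest(
        bytes(f % 256 for f in expected),
        bytes(f % 256 for f in heard),
    )

# The front desk derives the sequence and sends it to the guest's phone;
# playing it at any other booking's door (almost certainly) fails.
secret = b"per-lock-secret-provisioned-at-install"
tones = code_for_stay(secret, booking_id="guest-4212")
assert lock_accepts(secret, "guest-4212", tones)
```

The design choice this sketch highlights is that the phone never needs any special hardware (any handset that can play audio works), while the security rests entirely on keeping the per-lock secret and the per-stay code short-lived, which is presumably where the "easily imaginable failure modes" (recording and replaying the tones) would need to be addressed.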

There might be value to this technology or (more probably) others like it. Getting secure credentials onto people’s phones has a lot of promise.

But this iteration? Should it survive testing (and the easily imaginable failure modes and attacks on it), it might provide a scintilla of convenience in hotels.