January 2009

Jerry Yang’s departure as Yahoo! CEO opens the door to a renewed bid by Microsoft to buy Yahoo!’s search business (or Yahoo! itself).  Such a merger could produce a significantly stronger challenger to Google in the search market.  With this possibility in mind, the WSJ just ran a fascinating history of the “paid search” business—the placement of “contextually targeted” ads next to search engine results based on the search terms that produced those results.

In a nutshell, Microsoft failed to see (back in 1998-2003) the enormous potential of paid search—just as small start-ups (such as Google) were starting to develop the technology and business model that today account for a $12+ billion/year industry, one that is twice the size of the display ad market and that supports a great deal of the content and services we have all come to take for granted online.  Microsoft first dipped its toe into paid search with a small-scale partnership with Goto.com in 1999-2000.  But this partnership failed because of internal resistance from the managers of Microsoft’s display-ad program.  In 2000, Google launched AdWords and thus began its transformation from start-up into economic colossus.  By 2002, Microsoft realized that it needed to catch up fast and approached Goto.com (by then renamed Overture) about a takeover.  But Microsoft ultimately chose in 2003 not to buy the start-up because Bill Gates and Steve Ballmer “balked at Overture’s valuation of $1 billion to $2 billion, arguing that Microsoft could create the same service for less.”

Microsoft, meanwhile, spent the next 18 months deploying hundreds of programmers to build a search engine and a search-ad service, which it code-named Moonshot. The company launched its search engine in late 2004 and its search-ad system in May 2006.

But Microsoft’s ad system came too late:

Advertisers applauded Moonshot for its technical innovation. But Microsoft had trouble coaxing people to migrate to its search engine from Google; advertisers were unwilling to spend large sums on MSN’s search ads. By building a new system instead of buying Overture, Mr. Mehdi says, “we really delayed our time to market.”

What’s most fascinating about the piece is that it suggests Microsoft missed its opportunity to get into paid search not because it was “dumb,” “uninnovative,” or a “bad” company, but for the same sorts of reasons that big, highly successful, and even particularly innovative companies fail: the very traits that let such companies master “adaptive” innovation of the technologies behind their established business models are what make them struggle to encourage or channel the “disruptive” innovation that renders those core technologies and business models obsolete.

As Berin and I have noted here before (here and here), there seems to be no shortage of competition and innovation in the mobile operating system (OS) space. We’ve got:

  1. Apple’s iPhone platform,
  2. Microsoft’s Windows Mobile,
  3. Symbian,
  4. Google’s Android,
  5. BlackBerry,
  6. Palm OS (+ Palm’s new WebOS),
  7. the LiMo platform, and
  8. OpenMoko.

Am I missing any? I don’t think so. And even if I have, this is really an astonishing degree of platform competition for a network-based industry. Network industries are typically characterized by platform consolidation over time as both application developers and consumers flock to just a couple of standards — and sometimes just one — while others gradually fade away. But that has not yet been the case for mobile operating systems.  I just can’t see it lasting, however. As I argued in my essay on “Too Much Platform Competition?,” I would think that many application providers would be clamoring for consolidation to make it easier to develop and roll out new services.  Some are, and yet we still have more than a half-dozen mobile OS platforms on the market.

Regardless, the current level of platform competition also seems to run counter to the thesis set forth by Jonathan Zittrain and others who fear the impending decline or death of digital “generativity.” That is, technologies or networks that invite or allow tinkering and all sorts of creative uses are supposedly “dying” or on the decline because companies are trying to exert more control through proprietary or closed systems.

In early December, Jerry Brito asked whether Obama’s proposal to create the post of  Chief Technology Officer (CTO) should be feared or welcomed:

I think the question turns on whether this person will be CTO of the United States or CTO of the U.S. Federal Government. While I personally believe the former should be feared, the latter should be welcomed.

I agree completely—and it now seems that this is in fact where the incoming Administration is heading.  BusinessWeek reports that the Obama Administration has narrowed its choices down to two Indian-American CTOs:

  • Vivek Kundra, D.C.’s CTO
  • Padmasree Warrior, Cisco’s CTO

Judging by BusinessWeek’s short descriptions, both candidates sound terrifically well-qualified to lead implementation of Obama’s oft-repeated promises to bring the United States government into the Web 2.0 era.  More importantly, the fact that the two likely candidates are CTOs—rather than, say, advocates of any particular technology policy agenda—strongly suggests that the Obama administration isn’t contemplating giving the CTO authority to set technology policy outside the Federal government.  

Whoever Obama chooses in the end will have his or her work cut out.  While free marketeers may indeed have much to fear from Obama’s technology policy agenda in terms of over-regulation, increased government control and market-distorting subsidies, e-government is one area where we ought to be able to cheer the new President on: the Federal government could be made much more transparent and democratically accountable if Federal agencies simply adopted some of the tools users take for granted on private websites, such as RSS feeds and standardized data.
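To make the “standardized data” point a bit more concrete, here is a minimal sketch (in Python) of how anyone could programmatically keep tabs on an agency’s activity once that agency publishes an ordinary RSS 2.0 feed. The feed URL below is purely hypothetical; any standards-compliant feed would work the same way:

```python
# Minimal, illustrative sketch: reading a hypothetical federal agency RSS 2.0 feed.
# The URL is a placeholder, not a real government endpoint.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://www.example-agency.gov/newsroom/rss.xml"  # hypothetical

def latest_items(url, limit=5):
    """Return (title, link, pubDate) tuples for the newest entries in an RSS feed."""
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    items = []
    # Standard RSS 2.0 layout: <rss><channel><item>...</item></channel></rss>
    for item in tree.findall("./channel/item")[:limit]:
        items.append((
            item.findtext("title", default=""),
            item.findtext("link", default=""),
            item.findtext("pubDate", default=""),
        ))
    return items

if __name__ == "__main__":
    for title, link, published in latest_items(FEED_URL):
        print(f"{published}  {title}\n    {link}")
```

The point isn’t this particular script, of course, but that once the data is published in a standard, machine-readable format, journalists, watchdog groups, and ordinary citizens can build their own monitoring tools on top of it without asking anyone’s permission.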

Let’s just hope that Obama makes it very clear in creating the CTO post that its responsibilities are indeed strictly limited to directing the adoption of information technology inside the Federal government, so that the position doesn’t mushroom into the more powerful “Technology Czar” some rightly fear.

Gamepolitics.com reports on a new South Carolina bill that proposes to outlaw public profanity. The measure, S. 56, stipulates that:

It is unlawful for a person in a public forum or place of public accommodation wilfully and knowingly to publish orally or in writing, exhibit, or otherwise make available material containing words, language, or actions of a profane, vulgar, lewd, lascivious, or indecent nature. A person who violates the provisions of this section is guilty of a felony and, upon conviction, must be fined not more than five thousand dollars or imprisoned not more than five years, or both.

Let’s ignore the free speech issues in play here — although they are numerous — and ask the 3 questions I increasingly put front and center in everything I write about modern censorship efforts: (1) How do they plan to enforce it? (2) Is there really any chance of such a law being even remotely successful? (3) How onerous or intrusive will it be to attempt to do so?

We can imagine a few examples of where such a law could create serious challenges. For example, how would law enforcement officials deal with public swearing at ball games and other sporting events? And not just in the crowd but from the players. Have you ever heard Tiger Woods after he misses a close putt? Wow. He might want to avoid the next tournament in South Carolina!  But how about profanities uttered in other public places, like hospitals during painful procedures? Man, you should have heard the profanities my wife was letting loose when our two kids were arriving in this world. Would have made George Carlin blush!  And how about the halls of Congress? Oh my, now there is an education in sailor talk, although perhaps less so with VP Cheney departing. I’m sure that more than a few choice profanities have been tossed about down in the South Carolina statehouse at times. And there are lots of other cases where enforcement would be challenging: bars, concerts, comedy shows, political protests, etc.

Bottom line: We are talking about a lot of fines and jail time down in the Palmetto State!

Look, I’m as uncomfortable with excessive public profanity as anyone else when my kids are around. But I talk to them about it because I know it will never go away. When we are surrounded by foul-mouthed hooligans at ballgames, I frequently remind my kids that it’s not appropriate to say such things and that only “stupid people use stupid words.” Of course, Dad has been known to use his share of “stupid words” at home at times, too.  My daughter recently threatened to “tell Mommy” on me when I hit my thumb with a hammer and let a choice word fly. Hey, it happens. It’s not the end of the world. We can try to change our habits and teach our kids better manners. But I doubt laws like the one South Carolina is considering will really make much of a difference.

It has been suggested that the American wireless market is a “textbook oligopoly” in which the four national carriers have little incentive to innovate or further reduce prices. I’m more sympathetic to this argument than some libertarians, but over at Techdirt Carlo offers some evidence that competition is alive and well in the wireless marketplace. For a while, the national carriers have offered unlimited voice and text messaging services for around $100/month. Carlo notes that a couple of regional carriers that focus on the low end of the market and have less comprehensive coverage maps have started offering unlimited connectivity for as little as $50/month. The latest development is that Sprint’s Boost Mobile unit is joining the $50/month flat rate club.

Jim Harper has made this point about the wired broadband market, but it deserves to be made here too: competition happens along multiple dimensions. Consumers make different trade-offs between price and quality, and so products with different feature sets and price points often compete directly with one another. There may be only four national carriers, and the regional carriers may not be able to offer service that the typical consumer finds comparable to the offerings of the national networks, but that doesn’t mean the regional carriers are irrelevant. Offering a bargain option at the low end of the market really does put pressure on the margins of the tiers above it. As long as there are some AT&T and Verizon customers who would be willing to put up with spotty coverage in exchange for a lower phone bill, AT&T and Verizon will have an incentive to cut their prices over time.

Of course, we could use more wireless competition. But we also shouldn’t lose sight of how much good the spectrum that’s already been auctioned off has done. It’s hard to create competitive telecom markets. For all of its flaws, the mobile industry is a real success story. And the solution to the flaws is to continue what we started 15 years ago: auctioning off more spectrum and creating real property rights in the airwaves.

The Internet Safety Technical Task Force (ISTTF), which was formed a year ago to study online safety concerns and technologies, today issued its final report to the U.S. Attorneys General who authorized its creation. It was a great honor for me to serve as a member of the ISTTF and I believe this Task Force and its report represent a major step forward in the discussion about online child safety in this country.

The ISTTF was very ably chaired by John Palfrey, co-director of Harvard University’s Berkman Center for Internet & Society, and I just want to express my profound thanks here to John and his team at Harvard for doing a great job herding cats and overseeing a very challenging process. I encourage everyone to examine the full ISTTF report and all the submissions, presentations, and academic literature that we collected. [It’s all here.] It was a comprehensive undertaking that left no stone unturned.

Importantly, the ISTTF convened (1) a Research Advisory Board (RAB), which brought together some of the best and brightest academic researchers in the field of child safety and child development, and (2) a Technical Advisory Board (TAB), which included some of America’s leading technologists, who reviewed child safety technologies submitted to the ISTTF. I strongly recommend you closely examine the RAB literature review and TAB assessment of technologies because those reports provide very detailed assessments of the issues. They both represent amazing achievements in their respective arenas.

There are a couple of key takeaways from the ISTTF’s research and final 278-page report that I want to highlight here. Most importantly, like past blue-ribbon commissions that have studied this issue, the ISTTF has generally concluded there is no silver-bullet technical solution to online child safety concerns. The better way forward is a “layered approach” to online child protection. Here’s how we put it on page 6 of the final report:

The Task Force remains optimistic about the development of technologies to enhance protections for minors online and to support institutions and individuals involved in protecting minors, but cautions against overreliance on technology in isolation or on a single technological approach. Technology can play a helpful role, but there is no one technological solution or specific combination of technological solutions to the problem of online safety for minors. Instead, a combination of technologies, in concert with parental oversight, education, social services, law enforcement, and sound policies by social network sites and service providers may assist in addressing specific problems that minors face online. All stakeholders must continue to work in a cooperative and collaborative manner, sharing information and ideas to achieve the common goal of making the Internet as safe as possible for minors.


Genachowski for the FCC


President-elect Obama intends to appoint Julius Genachowski, a protege of former FCC chairman Reed Hundt, as the commission’s next chairman.

Having been at the FCC with Hundt, Genachowski should have seen industries largely ignored by the commission — cable and wireless — thrive as a result of deregulation while the telephone industry it attempted to reinvent soon crashed.

As George Gilder and I noted in a paper this past summer, when the 1996 law passed, there were several cable operators who planned to offer competitive phone services in a venture that included Sprint Corp. These plans were shelved, according to Sprint CEO William T. Esrey, due to the FCC‘s “pro-competition” policies: “If we provided telephony service over cable, we recognized that they would have to make it available to competitors.” Thus, the local competition rules, which were intended to speed effective competition, actually delayed it. Cable voice services did not gain significant momentum until 2004, when the FCC scaled back its pro-competition rules. Those changes prompted phone companies to enter the video market dominated by cable operators, who in turn accelerated their entry into the voice market dominated by incumbent phone companies.

Genachowski should know that in its pure form net neutrality regulation would encumber broadband networks with the same open access regulation which failed when applied to local telephone networks.

I’ve been working closely with PFF’s new Adjunct Fellow Michael Palage on ICANN issues.  Here is his latest note, from the PFF blog.

ICANN recently proclaimed that the “Joint Project Agreement” (one of two contractual arrangements that ICANN has with the U.S. Department of Commerce (DoC) governing ICANN’s operations) will come to an end in September 2009. ICANN’s insistence on this point first became clear back in October 2008 at ICANN’s Washington, D.C. public forum on Improving Institutional Confidence, when Peter Dengate Thrush, Chair of ICANN’s Board, declared:

the Joint Project Agreement will conclude in September 2009. This is a legal fact, the date of expiry of the agreement. It’s not that anyone’s declared it or cancelled it; it was set up to expire in September 2009.

ICANN’s recently published 2008 Annual Report stuck to this theme:

“As we approach the conclusion of the Joint Project Agreement between the United States Department of Commerce and ICANN in September 2009…” – His Excellency Dr. Tarek Kamel, Minister of Communications and Information Technology, Arab Republic of Egypt

“Concluding the JPA in September 2009 is the next logical step in transition of the DNS to private sector management.” – ICANN Staff

“This consultation’s aim was for the community to discuss possible changes to ICANN in the lead-up to the completion of the JPA in September 2009.” – ICANN Staff

ICANN’s effort to make the termination of the JPA seem inevitable is concerning on two fronts. First, ICANN fails to mention that the current JPA appears to be merely an extension/revision of the original 1998 Memorandum of Understanding (MoU) with DoC, which was set to expire in September 2000. As Bret Fausset argues in his excellent analysis of the relationship between the MoU and the JPA (also discussed by Milton Mueller), the JPA does not appear to be a free-standing agreement, but merely a continuation of the MoU. It would therefore be more correct to talk about whether the “MoU/JPA” (meaning the entire agreement as modified by the most recent JPA) will expire or be extended.

I haven’t been blogging much lately because, along with my PFF colleagues Berin Szoka and Adam Marcus, I’m working on a lengthy paper about the importance of Section 230 to Internet freedom. Section 230 is the sometimes-forgotten portion of the Communications Decency Act of 1996 that shielded Internet Service Providers (ISPs) from liability for information posted or published on their systems by users or other third parties. It was enshrined into law with the passage of the historic Telecommunications Act of 1996. Importantly, even though the provisions of the CDA seeking to regulate “indecent” speech on the Internet were struck down as unconstitutional, Sec. 230 was left untouched.

Section 230 of the CDA may be the most important and lasting legacy of the Telecom Act, and it has been indisputably important to the development of the Internet generally and to online free speech and expression in particular. In many ways, Section 230 is the cornerstone of “Internet freedom” in the truest and best sense of that term.

In recent years, however, Sec. 230 has come under fire from some academics, judges, and other lawmakers. Critics raise a variety of complaints — all of which we will be cataloging and addressing in our forthcoming PFF paper. But what unifies most of the criticisms of Sec. 230 is the belief that Internet “middlemen” (which increasingly includes almost any online intermediary, from ISPs, to social networking sites, to search engines, to blogs) should do more to police their networks for potentially “objectionable” or “offensive” content. That could include many things, of course: cyberbullying, online defamation, harassment, privacy concerns, pornography, etc. If the online intermediaries failed to engage in that increased policing role, they would open themselves up to lawsuits and increased liability for the actions of their users.

The common response to such criticisms — and it remains a very good one — is that the alternative approach of imposing strict secondary liability on ISPs and other online intermediaries would have a profound “chilling effect” on online free speech and expression. Indeed, we should not lose sight of what Section 230 has already done to create vibrant, diverse online communities. Brian Holland, a visiting professor at Penn State University’s Dickinson School of Law, has written a brilliant paper that does a wonderful job of showing just that. It’s entitled “In Defense of Online Intermediary Immunity: Facilitating Communities of Modified Exceptionalism” and it can be found on SSRN here. I cannot recommend it highly enough. It is a masterpiece.

Apple has announced it will be dropping DRM, completing the transition from its DRM-Free-For-a-Fee model to one in which DRM’d music isn’t an option at all. As Ars reports, it’ll take until August to see all DRM’d content leave the iTunes store.

This seems to be the final stage in a transition that started in February of 2007.  That’s when Steve Jobs wrote his now-famous “Thoughts on Music” memo.  Since then we’ve seen Amazon.com open its DRM-free store and, more recently, the RIAA change its tactics and declare its war on downloaders over.  It seems that the music industry is slowly realizing how it must adapt to life in a digital world.

While music is learning its lesson, Hollywood seems to be willfully ignorant.  The major studios remain staunchly pro-DRM and continue to fight even those activities that should be perfectly legal.

The lawsuit that Viacom, Sony, Fox, Universal, Disney, and Warner Bros. have filed against RealNetworks is the latest example of Hollywood’s refusal to adapt.  The studios are up in arms over RealDVD—software that allows consumers to copy DVDs to their personal computers.  But RealNetworks CEO Rob Glaser seems determined to fight the Hollywood giants.
