Boxee vs. the DMCA

January 18, 2009

I was very interested to read Berin’s post about Boxee, a product I had not heard about until today. I’ve been asking for years why there are no good video jukebox products on the market, so I’m always interested to see new entrants.

If Wikipedia is to be believed, Boxee is a fork of the XBMC Media Center, which I first wrote about way back in 2006. The reason you may not have heard more about the XBMC Media Center is that it sits in an uncomfortable legal grey area. Thanks to the DMCA, one of its most important features—the ability to play and rip DVDs—is illegal. And there are probably other DMCA- and software-patent-related legal impediments to releasing a product like the XBMC. As a consequence, the major consumer electronics manufacturers have released relatively crippled set-top boxes that have not caught on with consumers.

Boxee’s Wikipedia page suggests that Boxee uses libdvdcss, a cousin of the DeCSS library that the courts ruled to be an illegal “circumvention device” back in 2001. And the DMCA holds that someone who “traffics” in a circumvention device “willfully and for purposes of commercial advantage or private financial gain” can be fined up to $500,000 and imprisoned for up to 5 years.
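To make the technical stakes concrete, here is a minimal sketch, using the standard libdvdcss API, of how a DVD player in the XBMC mold typically reads decrypted sectors from a CSS-scrambled disc. This is my own illustration of what such a library does, not Boxee’s or XBMC’s actual code, and it is precisely this on-the-fly descrambling that the DMCA treats as “circumvention”:

```c
/* Illustrative only: how a player typically uses libdvdcss to read
 * decrypted sectors from a CSS-scrambled DVD. Link with -ldvdcss.
 * Not Boxee's or XBMC's actual code. */
#include <stdio.h>
#include <stdlib.h>
#include <dvdcss/dvdcss.h>

int main(void)
{
    unsigned char buffer[DVDCSS_BLOCK_SIZE];   /* one 2048-byte DVD sector */

    /* Open the DVD device; libdvdcss handles the CSS key work. */
    dvdcss_t dvd = dvdcss_open("/dev/dvd");
    if (dvd == NULL) {
        fprintf(stderr, "could not open DVD\n");
        return EXIT_FAILURE;
    }

    /* Seek to the start of the disc and retrieve the title key for the
     * area about to be read. */
    if (dvdcss_seek(dvd, 0, DVDCSS_SEEK_KEY) < 0) {
        fprintf(stderr, "seek error: %s\n", dvdcss_error(dvd));
        dvdcss_close(dvd);
        return EXIT_FAILURE;
    }

    /* Read one sector, descrambled on the fly. A real player loops over
     * the whole title and hands the MPEG-2 stream to a decoder. */
    if (dvdcss_read(dvd, buffer, 1, DVDCSS_READ_DECRYPT) != 1) {
        fprintf(stderr, "read error: %s\n", dvdcss_error(dvd));
        dvdcss_close(dvd);
        return EXIT_FAILURE;
    }

    printf("read one decrypted sector\n");
    dvdcss_close(dvd);
    return EXIT_SUCCESS;
}
```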

Now, the NYT article says that “Lawyers say that Boxee does not appear to be doing anything illegal,” although it doesn’t quote any actual lawyers, nor does it say which legal issues those lawyers examined. It’s possible that Boxee stripped out libdvdcss and replaced it with code approved by the DVD Copy Control Association, which licenses CSS. Moreover, it seems that Boxee’s strategy is to just build cool technologies and let the legal chips fall where they may:

Mr. Ronen said that like many start-ups, Boxee was definitely leaping without looking. “Don’t assume we have lawyers. That’s expensive,” he said.

This is a very risky strategy, both from a business perspective and for Ronen personally. But it’s also likely to pay off. If Ronen is able to get enough customers before the MPAA can be roused into taking legal action, he has a pretty good shot at winning the resulting PR war and forcing the MPAA to back down, even if the MPAA has the law on its side. And indeed, that may be the only way to break into this market, because if he plays by the rules he’ll never get permission to build a set-top box the studios don’t control.

Fortunately, courts tend to be swayed by the perceived “legitimacy” of a technology’s designers. Remember, for example, that just 7 years after suing to keep MP3 players off the market, the recording industry insisted to the Supreme Court that everyone knew MP3 players were legal. There weren’t any changes to the law in the interim. Rather, MP3 players had become a familiar technology and so judges intuitively “knew” that any interpretation of the law that ruled out MP3 players must be wrong. If Boxee can grow fast enough, and can cultivate a “good citizen” image, it may be able to persuade judges that any interpretation of the DMCA that precludes Boxee must be wrong.

The more fundamental point, of course, is that it’s ridiculous that Ronen has to worry about these legal issues in the first place. The copy protection technologies Ronen is circumventing haven’t stopped piracy; they’ve simply given Hollywood a legal club with which to bludgeon technology companies it doesn’t like. Had the DMCA not been on the books, we likely would have seen a proliferation of XBMC-like devices and software on the market several years ago.

This ongoing series has explored the increasing ability of consumers to “cut the cord” to traditional video distributors (cable, satellite, etc.) and instead receive a mix of “television” programming and other forms of video programming over the Internet. As I’ve argued, this change not only means lower monthly bills for those “early adopter” consumers who actually do “cut the cord”, but also, in the coming years, a total revolution in the traditional system of content creation and distribution on which the FCC’s existing media regulatory regime is premised.

This revolution has two key parts:

  1. Conduits: The growing inventory—and popularity—of sites such as Hulu, Amazon Unbox and the XBox 360 Marketplace (or software such as Apple’s iTunes store) that allow users to view or download video content.  Drawing an analogy to the FCC’s term “Multichannel Video Programming Distributor” or MVPD (cable, direct broadcast satellite, telco fiber, etc.), I’ve dubbed these sites “Internet Video Programming Distributors” or IVPDs.
  2. Interface:  The hardware and software that allows users to display that content easily on a device of their choice, especially their home televisions.

While much of the conversation about “interface” has focused on special hardware that brings IVPD content to televisions through set-top boxes such as the Roku box or game consoles like the XBox 360, at least one company is making waves with a software solution.  From the NYT:

Boxee bills its software as a simple way to access multiple Internet video and music sites, and to bring them to a large monitor or television that one might be watching from a sofa across the room. Some of Boxee’s fans also think it is much more: a way to euthanize that costly $100-a-month cable or satellite connection. “Boxee has allowed me to replace cable with no remorse,” said Jef Holbrook, a 27-year-old actor in Columbus, Ga., who recently downloaded the Boxee software to the $600 Mac Mini he has connected to his television. “Most people my age would like to just pay for the channels they want, but cable refuses to give us that option. Services like Boxee, that allow users choice, are the future of television.” … Boxee gives users a single interface to access all the photos, video and music on their hard drives, along with a wide range of television shows, movies and songs from sites like Hulu, Netflix, YouTube, CNN.com and CBS.com.


Jerry Yang’s departure as Yahoo! CEO opens the door to a renewed bid by Microsoft to buy Yahoo!’s search business (or Yahoo! itself).  Such a merger could produce a significantly stronger challenger to Google in the search market.  With this possibility in mind, the WSJ just ran a fascinating history of the “paid search” business—the placement of “contextually targeted” ads next to search engine results based on the search terms that produced those results.

In a nutshell, Microsoft failed to see (back in 1998-2003) the enormous potential of paid search—just as small start-ups (such as Google) were starting to develop the technology and business model that today account for a $12+ billion/year industry, which is twice the size of the display ad market and which supports a great deal of the online content and services we have all come to take for granted. Microsoft first put its toe in the water of paid search with a small-scale partnership with Goto.com in 1999-2000. But this partnership failed because of internal resistance from the managers of Microsoft’s display-ad program. In 2000, Google launched AdWords and thus began its transformation from start-up into economic colossus. By 2002, Microsoft realized that it needed to catch up fast, and approached Goto.com (by then renamed Overture) about a takeover. But Microsoft ultimately chose in 2003 not to buy the startup because Bill Gates and Steve Ballmer “balked at Overture’s valuation of $1 billion to $2 billion, arguing that Microsoft could create the same service for less.”

Microsoft, meanwhile, spent the next 18 months deploying hundreds of programmers to build a search engine and a search-ad service, which it code-named Moonshot. The company launched its search engine in late 2004 and its search-ad system in May 2006.

But Microsoft’s ad system came too late:

Advertisers applauded Moonshot for its technical innovation. But Microsoft had trouble coaxing people to migrate to its search engine from Google; advertisers were unwilling to spend large sums on MSN’s search ads. By building a new system instead of buying Overture, Mr. Mehdi says, “we really delayed our time to market.”

What’s most fascinating about the piece is that it suggests Microsoft missed its opportunities to get into paid search not because it was “dumb,” “uninnovative” or a “bad” company, but for the same sorts of reasons that big, highly successful and even particularly innovative companies so often fail. The reasons companies generally succeed in mastering “adaptive” innovation of the technologies behind their established business models are the very reasons why such great companies struggle to encourage or channel the “disruptive” innovation that renders their core technologies and business models obsolete.

As Berin and I have noted here before (here and here), there seems to be no shortage of competition and innovation in the mobile operating system (OS) space. We’ve got:

  1. Apple’s iPhone platform,
  2. Microsoft’s Windows Mobile,
  3. Symbian,
  4. Google’s Android,
  5. BlackBerry,
  6. Palm OS (+ Palm’s new WebOS),
  7. the LiMo platform, and
  8. OpenMoko.

Am I missing any? I don’t think so. Even if I am, this is really an astonishing degree of platform competition for a network-based industry. Network industries are typically characterized by platform consolidation over time as both application developers and consumers flock to just a couple of standards — and sometimes just one — while others gradually fade away. But that has not yet been the case for mobile operating systems. I just can’t see it lasting, however. As I argued in my essay on “Too Much Platform Competition?,” I would think that many application providers would be clamoring for consolidation to make it easier to develop and roll out new services. Some are, and yet we still have more than a half-dozen mobile OS platforms on the market.

Regardless, the current level of platform competition also seems to run counter to the thesis set forth by Jonathan Zittrain and others who fear the impending decline or death of digital “generativity.” That is, technologies or networks that invite or allow tinkering and all sorts of creative uses are supposedly “dying” or on the decline because companies are trying to exert more control over proprietary or closed systems.

In early December, Jerry Brito asked whether Obama’s proposal to create the post of  Chief Technology Officer (CTO) should be feared or welcomed:

I think the question turns on whether this person will be CTO of the United States or CTO of the U.S. Federal Government. While I personally believe the former should be feared, the latter should be welcomed.

I agree completely—and it now seems that this is in fact where the incoming Administration is heading.  BusinessWeek reports that the Obama Administration has narrowed its choices down to two Indian-American CTOs:

  • Vivek Kundra, D.C.’s CTO
  • Padmasree Warrior, Cisco’s CTO

Judging by BusinessWeek’s short descriptions, both candidates sound terrifically well-qualified to lead implementation of Obama’s oft-repeated promises to bring the United States government into the Web 2.0 era.  More importantly, the fact that the two likely candidates are CTOs—rather than, say, advocates of any particular technology policy agenda—strongly suggests that the Obama administration isn’t contemplating giving the CTO authority to set technology policy outside the Federal government.  

Whomever Obama chooses in the end will have his or her work cut out. While free marketeers may indeed have much to fear from Obama’s technology policy agenda in terms of over-regulation, increased government control and market-distorting subsidies, e-government is one area where we ought to be able to cheer the new President on: the Federal government could be made much more transparent and democratically accountable if Federal agencies simply adopted some of the tools users take for granted on private websites, such as RSS feeds and standardized data.

Let’s just hope that Obama makes it very clear in creating the CTO post that its responsibilities are indeed strictly limited to directing the adoption of information technology inside the Federal government, so that the position doesn’t mushroom into the more powerful “Technology Czar” some rightly fear.

Gamepolitics.com reports on a new South Carolina bill that proposes to outlaw public profanity. The measure, S. 56, stipulates that:

It is unlawful for a person in a public forum or place of public accommodation wilfully and knowingly to publish orally or in writing, exhibit, or otherwise make available material containing words, language, or actions of a profane, vulgar, lewd, lascivious, or indecent nature. A person who violates the provisions of this section is guilty of a felony and, upon conviction, must be fined not more than five thousand dollars or imprisoned not more than five years, or both.

Let’s ignore the free speech issues in play here — although they are numerous — and ask the 3 questions I increasingly put front and center in everything I write about modern censorship efforts: (1) How do they plan to enforce it? (2) Is there really any chance of such a law being even remotely successful? (3) How onerous or intrusive will it be to attempt to do so?

We can imagine a few examples of where such a law could create serious challenges. For example, how would law enforcement officials deal with public swearing at ball games and other sporting events? And not just in the crowd but from the players. Have you ever heard Tiger Woods after he misses a close putt? Wow. He might want to avoid the next tournament in South Carolina!  But how about profanities uttered in other public places, like hospitals during painful procedures? Man, you should have heard the profanities my wife was letting loose when our two kids were arriving in this world. Would have made George Carlin blush!  And how about the halls of Congress? Oh my, now there is an education in sailor talk, although perhaps less so with VP Cheney departing. I’m sure that more than a few choice profanities have been tossed about down in the South Carolina statehouse at times. And there are lots of other cases where enforcement would be challenging: bars, concerts, comedy shows, political protests, etc.

Bottom line: We are talking about a lot of fines and jail time down in the Palmetto State!

Look, I’m as uncomfortable with excessive public profanity as anyone else when my kids are around. But I talk to them about it because I know it will never go away. When we are surrounded by foul-mouthed hooligans at ballgames, I frequently remind my kids that it’s not appropriate to say such things and that only “stupid people use stupid words.” Of course, Dad has been known to use his share of “stupid words” at home at times, too. My daughter recently threatened to “tell Mommy” on me when I hit my thumb with a hammer and let a choice word fly. Hey, it happens. It’s not the end of the world. We can try to change our habits and teach our kids better manners. But I doubt laws like the one South Carolina is considering will really make much of a difference.

It has been suggested that the American wireless market is a “textbook oligopoly” in which the four national carriers have little incentive to innovate or further reduce prices. I’m more sympathetic to this argument than some libertarians, but over at Techdirt Carlo offers some evidence that competition is alive and well in the wireless marketplace. For a while, the national carriers have offered unlimited voice and text messaging services for around $100/month. Carlo notes that a couple of regional carriers that focus on the low end of the market and have less comprehensive coverage maps have started offering unlimited connectivity for as little as $50/month. The latest development is that Sprint’s Boost Mobile unit is joining the $50/month flat rate club.

Jim Harper has made this point about the wired broadband market, but it deserves to be made here too: competition happens along multiple dimensions. Consumers make different trade-offs between price and quality, and so products with different feature sets and price points often compete directly with one another. There may be only four national carriers, and the regional carriers may not be able to offer service that the typical consumer finds comparable to the offerings of the national networks, but that doesn’t mean the regional carriers are irrelevant. Offering a bargain option at the low end of the market really does put pressure on the margins of the tiers above it. As long as there are some AT&T and Verizon customers who would be willing to put up with spotty coverage in exchange for a lower phone bill, AT&T and Verizon will have an incentive to cut their prices over time.

Of course, we could use more wireless competition. But we also shouldn’t lose sight of how much good the spectrum that’s already been auctioned off has done. It’s hard to create competitive telecom markets. For all of its flaws, the mobile industry is a real success story. And the solution to the flaws is to continue what we started 15 years ago: auctioning off more spectrum and creating real property rights in the airwaves.

The Internet Safety Technical Task Force (ISTTF), which was formed a year ago to study online safety concerns and technologies, today issued its final report to the U.S. Attorneys General who authorized its creation. It was a great honor for me to serve as a member of the ISTTF and I believe this Task Force and its report represent a major step forward in the discussion about online child safety in this country.

The ISTTF was very ably chaired by John Palfrey, co-director of Harvard University’s Berkman Center for Internet & Society, and I just want to express my profound thanks here to John and his team at Harvard for doing a great job herding cats and overseeing a very challenging process. I encourage everyone to examine the full ISTTF report and all the submissions, presentations, and academic literature that we collected. [It’s all here.] It was a comprehensive undertaking that left no stone unturned.

Importantly, the ISTTF convened (1) a Research Advisory Board (RAB), which brought together some of the best and brightest academic researchers in the field of child safety and child development and (2) a Technical Advisory Board (TAB), which included some of America’s leading technologists, who reviewed child safety technologies submitted to the ISTTF. I strongly recommend you closely examine the RAB literature review and TAB assessment of technologies because those reports provide very detailed assessments of the issues. They both represent amazing achievements in their respective arenas.

There are a couple of key takeaways from the ISTTF’s research and final 278-page report that I want to highlight here. Most importantly, like past blue-ribbon commissions that have studied this issue, the ISTTF has generally concluded there is no silver-bullet technical solution to online child safety concerns. The better way forward is a “layered approach” to online child protection. Here’s how we put it on page 6 of the final report:

The Task Force remains optimistic about the development of technologies to enhance protections for minors online and to support institutions and individuals involved in protecting minors, but cautions against overreliance on technology in isolation or on a single technological approach. Technology can play a helpful role, but there is no one technological solution or specific combination of technological solutions to the problem of online safety for minors. Instead, a combination of technologies, in concert with parental oversight, education, social services, law enforcement, and sound policies by social network sites and service providers may assist in addressing specific problems that minors face online. All stakeholders must continue to work in a cooperative and collaborative manner, sharing information and ideas to achieve the common goal of making the Internet as safe as possible for minors.


Genachowski for the FCC

January 13, 2009

President-elect Obama intends to appoint Julius Genachowski, a protege of former FCC chairman Reed Hundt, as the commission’s next chairman.

Having been at the FCC with Hundt, Genachowski should have seen industries largely ignored by the commission — cable and wireless — thrive as a result of deregulation while the telephone industry it attempted to reinvent soon crashed.

As George Gilder and I noted in a paper this past summer, when the 1996 law passed, there were several cable operators who planned to offer competitive phone services in a venture that included Sprint Corp. These plans were shelved, according to Sprint CEO William T. Esrey, due to the FCC’s “pro-competition” policies: “If we provided telephony service over cable, we recognized that they would have to make it available to competitors.” Thus, the local competition rules, which were intended to speed effective competition, actually delayed it. Cable voice services did not gain significant momentum until 2004, when the FCC scaled back its pro-competition rules. Those changes prompted phone companies to enter the video market dominated by cable operators, who in turn accelerated their entry into the voice market dominated by incumbent phone companies.

Genachowski should know that in its pure form net neutrality regulation would encumber broadband networks with the same open access regulation which failed when applied to local telephone networks.

I’ve been working closely with PFF’s new Adjunct Fellow Michael Palage on ICANN issues. Here is his latest note, from the PFF blog.

ICANN recently proclaimed that the “Joint Project Agreement” (one of two contractual arrangements that ICANN has with the U.S. Department of Commerce (DoC) governing ICANN’s operations) will come to an end in September 2009. ICANN’s insistence on this point first became clear back in October 2008 at ICANN’s Washington, D.C. public forum on Improving Institutional Confidence when Peter Dengate Thrush, Chair of ICANN’s Board, declared:

the Joint Project Agreement will conclude in September 2009. This is a legal fact, the date of expiry of the agreement. It’s not that anyone’s declared it or cancelled it; it was set up to expire in September 2009.

ICANN’s recently published 2008 Annual Report stuck to this theme:

“As we approach the conclusion of the Joint Project Agreement between the United States Department of Commerce and ICANN in September 2009…” – His Excellency Dr. Tarek Kamel, Minister of Communications and Information Technology, Arab Republic of Egypt
“Concluding the JPA in September 2009 is the next logical step in transition of the DNS to private sector management.” – ICANN Staff
“This consultation’s aim was for the community to discuss possible changes to ICANN in the lead-up to the completion of the JPA in September 2009.” – ICANN Staff

ICANN’s effort to make the termination of the JPA seem inevitable is concerning on two fronts. First, ICANN fails to mention that the current JPA appears to be merely an extension/revision of the original 1998 Memorandum of Understanding (MoU) with DoC, which was set to expire in September 2000. The JPA does not appear to be a free-standing agreement, but merely a continuation of the MoU, as Bret Fausset argues in his excellent analysis of the relationship between the MoU and the JPA (also discussed by Milton Mueller). It would therefore be more correct to ask whether the “MoU/JPA” (meaning the entire agreement as modified by the most recent JPA) will expire or be extended.