Related to last month’s series on platform monopolies, here’s an interesting article on Microsoft’s baby steps toward opening up the Xbox:
With the hobbyist release, the software giant is hoping to lay the groundwork for what one day will be a thriving network of enthusiasts developing for one another, something akin to a YouTube for games. The company, however, is pretty far from that goal.
In the first incarnation, games developed using the free tools will be available only to like-minded hobbyists, not the Xbox community as a whole. Those who want to develop games will have to pay a $99 fee to be part of a “Creators’ Club,” a name that is likely to change. Games developed using XNA Game Studio Express will be playable only by others who are part of the club.
Next spring, Microsoft hopes to have a broader set of tools that will allow for games to be created that can then be sold online through Microsoft’s Xbox Live Arcade. Microsoft will still control which games get published, and it’ll get a cut of the revenue.
Down the road, probably three to five years from now, Microsoft hopes to have an open approach, where anyone can publish games, and community response helps separate the hits from the flops.
Just so we’re clear, the obstacles to an open Xbox are legal and financial, not technical. If Microsoft’s goal were simply to make Xbox development tools more widely available, it could do that in a matter of months, leaving the console as open to development as the PC platform already is. What Microsoft actually wants is to open Xbox development to a wider audience of gamers without relinquishing its monopoly on developers’ access to the platform. That way it can be sure to get a cut of each game released.
Continue reading →
This post (and the previous one) is being posted from a 6-year-old iMac running the Ubuntu distribution of Linux.
The Ubuntu story is fascinating. It was created by Mark Shuttleworth, a thirty-something South African entrepreneur who made his fortune in the 1990s, flew to space in 2002, and then decided to knock Windows off its perch as the world’s leading desktop OS (a.k.a. fixing Bug #1). In just two years, Ubuntu has become widely recognized as the desktop version of Linux to beat.
Of course, a big part of its success is the $10 million a year he’s reportedly sinking into Ubuntu’s parent company, Canonical. Still, there’s no way you could build a full-featured OS from scratch in two years with $20 million, to say nothing of a desktop OS with hundreds of applications and support for multiple architectures. Ubuntu is a very thin layer of commercially developed (but free-as-in-speech) enhancements to off-the-shelf free software. Most importantly, Ubuntu is built atop Debian, a Linux distribution that focuses on stability and on using exclusively free software.
Below the cut I’ll give some of my initial impressions of the OS, which will necessarily be a bit more technical than the usual TLF fare. I’ll consider some of the economic implications of Ubuntu in a future post.
Continue reading →
On Tuesday, government officials in India rejected an offer to participate in a much-hyped project to distribute laptops costing US$100 each to the world’s impoverished children. A closer look reveals this scheme to be little more than open source evangelism in the Third World.
The laptop project is part of the One Laptop per Child initiative, an ambitious nonprofit effort endorsed by the United Nations to “revolutionize” education by providing every child on the planet with access to a computer. OLPC backers assume there is a universal need for every child to have a laptop, which they view as the gateway to a rosy future.
Read more here.
I recently discovered Jane Jacobs’s legendary book, The Death and Life of Great American Cities. I think there are a lot of interesting parallels between the arguments she makes about urban planning and the issues we argue about in technology policy. (One example that I’ll save for another post: her prescriptions for healthy cities are almost all oriented toward increasing the power of network effects generated by people’s proximity to one another.) My previous post about non-money-mediated markets reminded me of the chapter in Jacobs’s book about congestion.
Jacobs was living in New York when she wrote the book in 1961, and she described a political battle over whether to widen a road that ran through a large park in the city. The city planners, citing congestion problems, wanted to widen it and turn it into an expressway. Jacobs recounts how local activists, after beating back this proposal (which would have split the park in half and generally made life miserable for the locals), succeeded in getting the road closed entirely. The officials predicted mayhem on adjacent streets as all of those extra cars rerouted onto them. Yet, according to Jacobs, nothing of the sort occurred. If anything, traffic on nearby streets actually declined.
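Jacobs offers this as a real-world observation, but it has a well-known formal cousin: Braess’s paradox, in which removing a link from a road network can lower everyone’s travel time at equilibrium. Here’s a minimal sketch of the textbook example, with invented numbers rather than anything from Jacobs’s account:

```python
# Toy Braess's-paradox model: closing a road can speed everyone up.
# All numbers are invented for illustration.

DRIVERS = 4000
FIXED = 45                     # minutes on a wide road that never congests

def congested(cars):
    """Minutes on a narrow road; delay grows with the cars using it."""
    return cars / 100

# Shortcut open: the shortcut is free, so each driver's best response is
# to chain the two congestible roads together, and everyone piles up.
time_open = congested(DRIVERS) + congested(DRIVERS)       # 80 minutes

# Shortcut closed: drivers split evenly between two symmetric routes,
# each pairing one congestible road with one wide road.
time_closed = congested(DRIVERS / 2) + FIXED              # 65 minutes

print(f"with the shortcut:    {time_open:.0f} minutes per driver")
print(f"without the shortcut: {time_closed:.0f} minutes per driver")
```

With the link open, self-interested routing leaves every driver at 80 minutes; with it closed, the same drivers settle at 65, which is the same qualitative pattern Jacobs reports.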
Continue reading →
Last year I linked to this fantastic article by Clay Shirky on the reasons micropayments never took off. Shirky wrote:
A transaction can’t be worth so much as to require a decision but worth so little that that decision is automatic. There is a certain amount of anxiety involved in any decision to buy, no matter how small, and it derives not from the interface used or the time required, but from the very act of deciding.
Micropayments, like all payments, require a comparison: “Is this much of X worth that much of Y?” There is a minimum mental transaction cost created by this fact that cannot be optimized away, because the only transaction a user will be willing to approve with no thought will be one that costs them nothing, which is no transaction at all.
Thus the anxiety of buying is a permanent feature of micropayment systems, since economic decisions are made on the margin – not, “Is a drink worth a dollar?” but, “Is the next drink worth the next dollar?” Anything that requires the user to approve a transaction creates this anxiety, no matter what the mechanism for deciding or paying is.
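One way to make Shirky’s marginal logic concrete is a back-of-the-envelope model in which every purchase, however small, carries a fixed decision cost. All of the numbers below (article value, price, decision cost, subscription price) are invented for illustration:

```python
# Toy model of Shirky's "mental transaction cost" argument.
# Every number here is invented for illustration.

ARTICLES_PER_MONTH = 100
VALUE_PER_ARTICLE = 0.05   # what reading one article is worth to this reader
MICROPRICE = 0.02          # per-article micropayment
DECISION_COST = 0.04       # fixed "anxiety" cost of approving any purchase
SUBSCRIPTION = 3.00        # flat monthly price, approved once

# Micropayments: every article is a separate buy/don't-buy decision.
micro_surplus = ARTICLES_PER_MONTH * (VALUE_PER_ARTICLE - MICROPRICE - DECISION_COST)

# Subscription: a single decision covers the whole month of reading.
sub_surplus = ARTICLES_PER_MONTH * VALUE_PER_ARTICLE - SUBSCRIPTION - DECISION_COST

print(f"micropayment surplus: {micro_surplus:+.2f}")  # -1.00: reader walks away
print(f"subscription surplus: {sub_surplus:+.2f}")    # +1.96: reader subscribes
```

Even though each article is worth more than its two-cent price, the decision cost swamps the margin on every single purchase, while the subscription pays that cost exactly once.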
Shirky’s argument looks as solid today as it did six years ago. He pointed to three payment methods as alternatives: aggregation (bundle the business section with the sports section), subscription (take the paper every day), and subsidy (have advertisers pay for the paper). All three have clearly taken off, subsidy especially. Shirky followed that essay up with another in 2003:
Continue reading →
Over at the IPCentral blog, I had a lengthy and cordial discussion with Noel Le about the supposed contradiction between the rhetoric of open source and the support open source software receives from corporations. I started my first comment with “I don’t think I understand this critique,” and I’m still confused.
I won’t re-hash that debate, which you’re welcome to read for yourself if you’re interested, but I wanted to comment on the hostile attitude that some libertarian intellectuals seem to have toward open source software. Even libertarian luminaries like Richard Epstein have criticized open source projects as “unsustainable” and insinuated that they succeed only due to the largesse of billion-dollar software companies. Epstein seems to think that the open source movement is living on borrowed time, and that once the folks subsidizing it (the government, tax-funded universities, IBM, the developers themselves, whoever) get tired of all the free riding, the party will come to a halt.
For anyone who’s actually used open source software, or who knows open source programmers, this critique doesn’t ring true. Most open source projects existed and thrived for years before corporations took any notice of them, and only a small fraction of open source programmers are lucky enough to have employers who pay them to do it full time. Corporate support is obviously beneficial to open source efforts, but they would get along just fine without it.
Indeed, it seems to me that if you want to understand what drives open source software, the logical thing to do is to ask the people who are creating it. Their motivations haven’t exactly been a closely guarded secret. Open source programmers say they do what they do because they enjoy the intellectual challenge, because it helps them develop valuable skills, and because they enjoy impressing the community of their peers. So why don’t libertarians like Le and Epstein take them at their word?
Continue reading →
Nick Carr fundamentally misunderstands Yochai Benkler’s thesis about peer production and its relationship to markets and the firm:
One thing that becomes clear from the discussion [comparing Wikipedia to OSS] is how dangerous it is to use “open source” as a metaphor in describing other forms of participative production. Although common, the metaphor almost always ends up reducing the complex open-source model to a simplistic caricature.
The discussion also sheds light on a topic that I’ve been covering recently: Yochai Benkler’s contention that we are today seeing the emergence of sustainable large-scale production projects that don’t rely on either the pricing system or management structure. Benkler’s primary example is open source software. But panelist Siobhan O’Mahony’s description of the evolution of open source projects reveals that they have become increasingly interwoven with the pricing system and increasingly dependent on formal management structure.
Libertarians have long recognized that the firm and the market can be mixed and matched in complex ways. Firms obviously rely on markets to obtain raw materials and to sell their finished products. Markets are also organized by firms, as in the case of the New York Stock Exchange. Firms sometimes even use markets internally, as with Koch Industries, which uses “market-based management” to give individual divisions stronger incentives to be productive. Moreover, as Coase described, a firm constantly faces decisions about which of its goals to accomplish internally (i.e., using the methods of the firm) and which to outsource (i.e., using the methods of the market). No one would claim that any significant industry could be run purely as “markets” or purely as “firms.”
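Coase’s make-or-buy logic lends itself to a quick sketch: for each activity, compare the cost of doing it in-house against the market price plus the transaction costs (search, contracting, enforcement) of using the market. The activities and numbers below are hypothetical:

```python
# A minimal sketch of Coase's make-or-buy comparison.
# The activities and all costs are hypothetical.

activities = {
    #                (in-house cost, market price, transaction cost)
    "payroll":       (12.0,          8.0,          1.5),
    "core product":  (10.0,         11.0,          4.0),
    "legal work":    (20.0,         15.0,          2.0),
}

for name, (make, buy, friction) in activities.items():
    choice = "make (inside the firm)" if make <= buy + friction else "buy (on the market)"
    print(f"{name:13s} -> {choice}")
# payroll       -> buy (on the market)
# core product  -> make (inside the firm)
# legal work    -> buy (on the market)
```

The boundary of the firm falls wherever the comparison flips, which is why no real firm is ever all “market” or all “firm.”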
Continue reading →
Mike Linksvayer makes a good point about my triumphalist post last week concerning AOL versus the Internet:
How hard is it to imagine a world in which closed services like AOL remain competitive, or even dominant, leaving the open web to hobbyists and researchers?
One or two copyright-related alternative outcomes could have put open networks at a serious disadvantage.
First, it could have been decided that indexing the web (which requires making and storing copies of content) requires explicit permission. This may have stunted web search, which is critical for using the open web. Many sites would not have granted permission to index if explicit permission were required. Their lawyers would have advised them to not give away valuable intellectual property. A search engine may have had to negotiate deals with hundreds, then thousands (I doubt in this scenario there would ever be millions) of websites, constituting a huge barrier to entry. Google? Never happened. You’re stuck with Lycos.
Second, linking policies could have been held to legally constrain linking, or worse, linking could have been held to require explicit permission. Metcalfe’s law? Never mentioned in the context of the (stunted) web.
He’s certainly right that these things would have been bad, and that bad intellectual property policies over the last decade have probably stunted technological progress in ways analogous to the alternative history he lays out above. But I don’t think it’s a coincidence that the web played out the way it did. The Internet’s open architecture was a consequence of having been incubated in an open culture. The men (and a few women) who built the Internet did so with the conscious intention of building a decentralized, open network. Along with the TCP/IP protocol stack, HTML, HTTP, SMTP, and the rest, they developed a set of norms that emphasized openness and collaboration. The result was that when newcomers like Netscape and Microsoft came on the scene, their efforts to transform the Web into a private fiefdom met with fierce resistance from Internet old-timers.
Continue reading →
To wrap this series up, let me recap the reasons that platform monopolies are a bad idea:
Advocates of platform monopoly rights argue that such rights increase the profitability of new platform creation, thereby encouraging more R&D spending and innovation.
Technological systems are subject to “gains to interoperability,” analogous to the gains from trade.
Firms have an incentive to engage in “platform protectionism,” which reduces the surplus created by network effects but can increase the share of that surplus captured by the firm. (The toy model after this list makes the trade-off concrete.)
Often, very little of the value of a platform can be explained by the design decisions of the firm that created it.
A platform owner will tend to under-license its platform due to the inability of intermediaries to capture the full value created by the platform. This problem gets worse as the number of intermediaries increases, inefficiently “flattening” the development structure.
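As promised above, here’s a toy model of the platform protectionism trade-off: total surplus grows with the square of the user base (a Metcalfe-style network effect), and the owner chooses between openness (more users, a thinner slice of the surplus) and protectionism (fewer users, a fatter slice). Every parameter is invented:

```python
# Toy model of platform protectionism. The n**2 value function and all
# parameters are invented for illustration.

def total_surplus(users):
    return users ** 2 / 1_000_000    # Metcalfe-style network value

strategies = {
    #           (users,  owner's share of surplus)
    "open":     (50_000, 0.05),      # big network, thin slice
    "closed":   (20_000, 0.40),      # small network, fat slice
}

for name, (users, share) in strategies.items():
    surplus = total_surplus(users)
    print(f"{name:6s} total surplus: {surplus:6.0f}   owner's cut: {surplus * share:5.0f}")
# open   total surplus:   2500   owner's cut:   125
# closed total surplus:    400   owner's cut:   160
```

Under these (invented) numbers, closing the platform destroys most of the surplus yet raises the owner’s cut, which is exactly why platform protectionism can be privately rational and socially wasteful at the same time.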
Continue reading →
I got a first-hand lesson in the pace of technological change when I was writing my paper on the DMCA. I wrote the first draft last July, went through Cato’s editing process from October to January, and then the paper was released in March.
Between the initial writing of the paper and its publication nine months later, the online video marketplace underwent a revolution. When I was writing last July, I described the market for streaming video as stagnant, dominated by three mutually incompatible, vertically integrated video platforms controlled by Apple, Microsoft, and Real, respectively. By press time, YouTube had burst onto the scene.
The remarkable thing about YouTube is that, from a technical perspective, there’s nothing especially remarkable about it. Instead of inventing its own streaming format, as Apple, Microsoft, and Real had done, YouTube built its video player atop Adobe’s popular Flash technology. Flash was originally designed as a platform for lightweight web animations, but it has evolved into a full-featured application platform. Luckily for YouTube, Flash was already installed on the vast majority of PCs and Macs, so the company didn’t have to go to the trouble of getting new software onto millions of machines.
Here, I would argue, is a case where we had too many video platforms trying to do too much. Had Microsoft, Apple, and Real been able to agree on a unified video format, and allowed third parties to develop for it, streaming video would likely have taken off much earlier. But because each company was more focused on making sure its own format prevailed than on ensuring consumers had the best possible video experience, the streaming video market stayed in a kind of stalemate for close to a decade.
Continue reading →