What is “Optimal Interoperability”? A Review of Palfrey & Gasser’s “Interop”

June 11, 2012

I’m pretty rough on all the Internet and info-tech policy books that I review. There are two reasons for that. First, the vast majority of tech policy books being written today should never have been books in the first place. Most of them would have worked just fine as long-form (magazine-length) essays. Too many authors stretch a promising thesis into a long-winded, highly repetitive narrative just to say they’ve written an entire book about a subject. Second, many info-tech policy books are poorly written or poorly argued. I’m not going to name names, but I am frequently unimpressed by the quality of many books being published today about digital technology and online policy issues.

The books of Harvard University cyberlaw scholars John Palfrey and Urs Gasser offer a welcome break from this mold. Their recent books, Born Digital: Understanding the First Generation of Digital Natives, and Interop: The Promise and Perils of Highly Interconnected Systems, are engaging and extremely well-written books that deserve to be books. There’s no wasted space or mindless filler. It’s all substantive and it’s all interesting. I encourage aspiring tech policy authors to examine their works for a model of how a book should be done.

In a 2008 review, I heaped praise on Born Digital and declared that this “fine early history of this generation serves as a starting point for any conversation about how to mentor the children of the Web.” I still recommend it highly to others today. I’m going to be a bit more critical of their new book, Interop, but I assure you that it is a text you absolutely must have on your shelf if you follow digital policy debates. It’s a supremely balanced treatment of a complicated and sometimes quite contentious set of information policy issues.

In the end, however, I am concerned about the open-ended nature of the standard that Palfrey and Gasser develop to determine when government should intervene to manage or mandate interoperability between or among information systems. I’ll push back against their amorphous theory of “optimal interoperability” and offer an alternative framework that suggests patience, humility, and openness to ongoing marketplace experimentation as the primary public policy virtues that lawmakers should instead embrace.

Interop is Important, but Often Difficult & Filled with Trade-Offs

Palfrey and Gasser begin by noting that “there is no single, agreed-upon definition of interoperability” and that “there are even many views about what interop is and how it should be achieved” (p. 5). They set out to change that by developing “a normative theory identifying what we want out of all this interconnectivity” that the information age has brought us (p. 3).

Generally speaking, Palfrey and Gasser believe increased interoperability — especially among information networks and systems — is a good thing because it “provides consumers greater choice and autonomy” (p. 57), “is generally good for competition and innovation” (p. 90), and “can lead to systemic efficiencies” (p. 129).

But they wisely acknowledge that there are trade-offs, too, noting that “this growing level of interconnectedness comes at an increasingly high price” (p. 2). Whether we are talking about privacy, security, consumer choice, the state of competition, or anything else, Palfrey and Gasser argue that “the problems of too much interconnectivity present enormous challenges both for organizations and for society at large” (p. 2). Their chapter on privacy and security offers many examples, but one need only look at one’s own digital existence to realize the truth of this paradox. The more interconnected our information systems become, and the more intertwined our social and economic lives become with those systems, the greater the possibility of spam, viruses, data breaches, and various types of privacy or reputational problems. Interoperability giveth and it taketh away.

When Does “the Public Interest” Demand Interoperability Regulation?

So, how do we know when increased interoperability is good for us or society? How do we strike a reasonable balance? And, most controversially, when should government intervene to tip the balance in one direction or another?

Palfrey and Gasser return to these questions repeatedly throughout the book but admit that their answers will be dissatisfying since “there is no single form or optimal amount of interoperability that will suit every circumstance” (p. 76). Thus, “most of the specifics of how to bring interop about [must] be determined on a case-by-case basis” (p. 17). They elaborate:

That can feel unsatisfying. But it is an essential truth: the most interesting interop problems relate to society’s most complex and most fundamental systems. Their answers are never simple to come by, nor are they easy to implement. This characteristic of interop theory is a feature, not a bug. … The price to be paid for striving for a universal principle at the level of theory is that such a theory is full of nuances when it comes to application and practice (p. 17-18).

Fair enough. Yet, Palfrey and Gasser also make it clear they want government(s) to play an active role in ensuring optimal interoperability. They say they favor “blended approaches that draw upon the comparative advantages of the private and public sector” (p. 161), but they argue that government should feel free to tip or nudge interoperability determinations in superior directions. “If deployed with skill,” they argue, “the law can play a central role in ensuring that we get as close as possible to optimal levels of interoperability in complex systems” (p. 88).

That phrase — “optimal level of interoperability” — pops up repeatedly throughout the book. So, too, does the phrase “the public interest.” Palfrey and Gasser argue that governments must look out for “the public interest” and “optimal interoperability” since “market forces do not automatically lead to appropriate standards or to the adoption of the best available technology” (p. 167). Here they introduce two additional amorphous values that complicate the debate: “appropriate standards” and “best available technology.”

The fundamental problem with this “public interest” approach to interoperability regulation is that it is no better than the “I-know-it-when-I-see-it” standard we sometimes see at work in the realm of speech regulation. It’s an empty vessel, and if it is the lodestar by which policymakers make determinations about the optimal level of interoperability, then it leaves markets, innovators, and consumers subject to the arbitrary whims of what a handful of politicians or regulators think constitutes “optimal interoperability,” “appropriate standards,” and “best available technology.”

On the Limits of Knowledge

Palfrey and Gasser’s framework feels more than just “unsatisfying” in this regard; it feels downright insufficient. That’s because it is missing a major variable: the extent to which state actors are able to adequately define those terms or accurately forecast the future needs of markets or citizen-consumers.

Surprisingly, Palfrey and Gasser don’t really spend much time discussing the specific remedies the state might impose to achieve optimal interoperability. I would have liked to see them develop a matrix of interop options and then outline the strengths and weaknesses of each. But even absent a more detailed discussion of possible regulatory remedies, I would have settled for more concrete answers to the following questions: Why are we to assume that regulators possess the requisite knowledge needed to know when it makes sense to foreclose ongoing marketplace experimentation? And why should we trust that, by substituting their own will for that of countless other actors in the information technology marketplace, we will be left better off?

The closest Palfrey and Gasser get to defining a firm standard for when and why such state intervention is warranted comes on page 173 when they are discussing the need for the state to establish sound reasons for intervention. They argue:

The objective should not be interoperability per se but, rather, one or more public policy goal to which interoperability can lead. The goals that usually make sense are innovation and competition, but other objectives might include consumer choice, ease of use of a technology or system, diversity, and so forth (p. 173).

This is a bit better, but it still doesn’t fully grapple with the cost side of the cost-benefit calculus for intervention. Palfrey and Gasser are willing to at least acknowledge some of those problems when they remark that “the state is rarely in a position to call a winner among competing technologies” (p. 174). Moreover,

Lawmakers need to keep in view the limits of their own effectiveness when it comes to accomplishing optimal levels of interoperability. Case studies of government intervention, especially where complex information technologies are involved, show that states tend to be ill suited to determine on their own what specific technology will be the best option for the future (p. 175).

Quite right! Yet, that insight does not seem to influence their calls elsewhere in the book for regulatory activism. That’s a shame since the admonition about policymakers recognizing the “limits of their own effectiveness” should be able to help us devise some limiting principles regarding the state’s role.

Toward an Alternative Theory: Experimental, Evolutionary Interoperability

Allow me to offer a different theory of optimal interoperability that flows from these previous insights. It’s based on a more dynamic view of markets and the central importance of experimentation in the face of uncertainty. Let me just go ahead and articulate the core principles of what I will refer to as “experimental, evolutionary interoperability theory.” Then I’ll explain it in more detail.

  • Experimental, evolutionary interoperability: The theory that ongoing marketplace experimentation with technical standards, modes of information production and dissemination, and interoperable information systems is almost always preferable to the artificial foreclosure of this dynamic process through state action. The former allows for better learning and coping mechanisms to develop while also incentivizing the spontaneous, natural evolution of the market and market responses. The latter (regulatory foreclosure of experimentation) limits that potential.

Palfrey and Gasser would label this a “laissez-faire” theory of interoperability and oppose it since they believe “a pure laissez-faire approach to interop rarely works out well” (p. 160). But they are wrong, at least to the extent they include the sweeping modifier “rarely” to describe this model’s effectiveness. In reality, the vast majority of interoperability that occurs in today’s information economy happens in a completely natural, evolutionary fashion without any significant state intervention whatsoever. In countless small and big ways alike, interconnection and interoperability happen every day throughout society. Yes, it is true that interoperability often happens against the backdrop of a legal system that allows court action to enforce certain rights or address perceived harms, but I would not classify that as a significant direct state intervention to tip or nudge interconnection decisions in one direction or another. And when interoperability doesn’t happen naturally, there are often good reasons it doesn’t, and, even if there aren’t, non-interop spawns beneficial marketplace reactions and innovations.

Experimental, evolutionary interoperability theory flows out of Schumpeterian competition theory and the related field of evolutionary economics, but it is also heavily influenced by public choice theory (which stresses the limitations of romanticized theories of politics, planning, and “public interest” regulation). This alternative theory begins by accepting the simple fact that, as Austrian economist F.A. Hayek taught us, “progress by its very nature cannot be planned.” The wiser man, Hayek noted, “is very much aware that we do not know all the answers and that he is not sure that the answers he has are certainly the right ones or even that we can find all the answers.”

Ongoing experimentation with varying business models and modalities of social and economic production allows us to see what consumer choice and trial-and-error experimentation yield naturally over time. Ongoing experiments with flexible, voluntary interop standards and negotiations also allow us to determine which technological standards seem to benefit consumers in the short term while also encouraging innovators to leap-frog existing standards and platforms when they become locked in for too long or seem sub-optimal.

In the short term, it is entirely possible that such voluntary, evolutionary interop experiments “fail” in various ways. That is often a good thing. Failures are how individuals and a society learn to cope with change and devise systems and solutions to accommodate technological change. As Samuel Beckett once counseled: “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.” Progress depends upon embracing this uncertainty and accepting a world of constant upheaval if we are to learn how to cope, adapt, and move forward.

In this model, technological innovation often springs from the quest for the prize of market power.  Palfrey and Gasser generally reject this Schumpeterian vision of dynamic competition, but they at least do a nice job of describing it:

firms may have a stronger incentive to be innovative when low levels of interoperability promise higher or even monopoly profits. This sort of competition… creates incentives for firms to come up with entirely new generations of technologies or business methods that are proprietary (p. 121).

They reject this approach based on (1) the mistaken notion that the quest for the prize of market power ends in the attainment and preservation of that market power; and (2) the belief that policymakers possess the ability to set us on a better course through wise interventions.

In a moment, I’ll show why that is misguided by examining a few real-world case studies. For now, however, let’s return to Palfrey & Gasser’s central operating principle and contrast it with the vision I’ve articulated here. Recall that they argue “it is important to maintain and facilitate diversity in the marketplace. We simply want systems to work together when we want them to and to not work together when we do not.” Again, there is no standard here if one is suggesting this as the principle by which to determine when state intervention is desirable. But if one is looking at that aspirational statement as a description of the natural order of things — namely, that we do indeed “want systems to work together when we want them to and to not work together when we do not” — then that is a perfectly sound principle for understanding why state intervention should be disfavored in all but the most extreme circumstances. To reiterate: We should not allow the state to foreclose interoperability experiments because (a) those experiments have value in and of themselves, and (b) state action is likely to have myriad unintended consequences and unforeseen costs that are not easily remedied or reversed.

There are moments in the book when Palfrey and Gasser appear somewhat sympathetic to the sort of alternative “evolutionary interop” theory I have articulated here. For example, they note that:

The web is a great equalizer for technology firms. As consumers, we have come to expect that everything will work together without incident or interruption. We think it bizarre when something in the digitally networked world does not mesh with something else, perceiving whatever it is to be broken, in need of repair. This high degree of expectation is a powerful driver of interoperability. Market players are increasingly responding to this consumer demand and making these invisible links work for their customers without any government intervention (p. 28) [italics added]

You won’t be surprised to hear that I agree wholeheartedly! Moreover, what it really proves is that ongoing marketplace experimentation and the evolution of norms and standards generally solve interoperability problems as they develop. That doesn’t mean markets are perfectly competitive or always produce perfect interoperability. But, again, why should we believe state intervention will do a better job? And isn’t it possible that intervention could counteract the natural instincts Palfrey and Gasser describe, by which consumers and market actors make those “invisible links” work out as nicely as they do today?

Interop, Competition & Innovation: Some Case Studies of Evolutionary Interoperability in Action

To better explain experimental, evolutionary interop theory and how it plays out in the real-world, let’s examine the complex relationship between interoperability, competition, and innovation in the information economy through the prism of three case studies: AOL and instant messaging, video game consoles, and smartphones.

AOL

The case of America Online (AOL) is probably the most profound example of Schumpeterian creative destruction rapidly eroding the market power of a once “dominant” digital giant. Not long ago, AOL was cast as the great villain of online openness and interoperability. In fact, when Lawrence Lessig penned his acclaimed book Code in the late 1990s, AOL was supposedly set to become the corporate enslaver of cyberspace.

For a time, it was easy to see why Lessig and others were worried. Twenty-five million subscribers were willing to pay $20 per month to get a guided tour of AOL’s walled garden version of the Internet. Then AOL and media titan Time Warner announced a historic mega-merger that had some predicting the rise of “new totalitarianisms” and corporate “Big Brother.”

Fearing the worst, the Federal Trade Commission and the Federal Communications Commission placed several conditions on approval of the merger. These included “open access” provisions that forced Time Warner to offer the competing ISP service from the second largest ISP at that time (Earthlink) before it made AOL’s service available across its largest cable divisions. Another provision imposed by the FCC mandated interoperability of instant messaging systems based on the fear that AOL was poised to monopolize that emerging technology.

Palfrey and Gasser suggest this was a necessary and effective intervention. “The AOL IM case is another instance in which the role of government was key in establishing a more interoperable ecosystem,” they write, and they credit the FCC’s action with cutting AOL’s share of the IM market (p. 68-9). That’s a huge stretch. The reality is that markets and technologies evolved around AOL’s walled garden and decimated whatever advantage the firm had in either the web portal business or instant messaging market.

First, despite all the hand-wringing and regulatory worry, AOL’s merger with Time Warner quickly went off the rails and AOL’s online “dominance” soon evaporated. Looking back at the deal, Fortune magazine senior editor Allan Sloan called it the “turkey of the decade” since it cost shareholders hundreds of billions. Second, AOL’s attempt to construct the largest walled garden ever also failed miserably as organic search and social networking flourished. Consumers showed they demanded more than a hand-holding tour of cyberspace.

Finally, the hysteria about AOL’s threat to monopolize instant messaging and deny interoperability proved particularly unwarranted and also serves as a cautionary tale for those who argue regulation is needed to solve interoperability problems. At the time, well-heeled major competitors like Yahoo and Microsoft already had significant competing IM platforms, and others were rapidly developing. Interoperability among those systems was also spontaneously developing as consumers demanded greater flexibility among and within their communications systems. The development of Trillian, which allowed IM users to see all their various IM feeds at once, was an early precursor of what was to come. Today, anyone can download a free chat client like Digsby or Adium to manage multiple IM and email services from Yahoo!, Google, Facebook and just about anyone else, all within a single interface, essentially making it irrelevant which chat service friends use.

In a truly Schumpeterian sense, innovators came in and disrupted AOL’s plans to dominate instant messaging with innovative offerings that few critics or regulators would have believed possible just a decade ago. Progress happened, and nobody planned it from above. The FCC’s IM interoperability provision was quietly sunset less than three years after its inception since the evolution of technology and markets had rapidly eliminated the perceived problem. That mandate, as it turned out, wasn’t needed at all, and all it probably accomplished during its short life span was to hobble AOL’s ability to find a way to remain relevant in the increasingly competitive Web 2.0 world.

Video game consoles

At first blush, the video game console wars might seem like the ideal case study for those who favor greater interoperability regulation. After all, in a static sense, why do we really need several competing video game platforms that prevent consumers from playing their games on more than one system? The lack of console interoperability also drives up development costs for game makers. Many of those developers would prefer to just code games for a single, universal gaming platform. Therefore, isn’t this the perfect excuse for state intervention to ensure “optimal interoperability”?

To the contrary, this is another example of why government should generally avoid intervening to try to achieve some sort of artificial optimal interoperability. This market has undergone continuous, turbulent change and witnessed remarkable pro-consumer innovation despite a lack of interoperability.

The video game console wars have raged since the late 1970s. The first generation of consoles was dominated by Atari (2600), Mattel (Intellivision), and Coleco (ColecoVision). By the mid-1980s, the industry saw a new cast of characters displace the old players. Nintendo (NES) and Sega (Genesis) took the lead. Atari attempted a rebirth with its “Jaguar” console but failed miserably.

The demise of Atari’s 2600 console was particularly notable. When it debuted in 1977, the system revolutionized the home game market on its way to selling more than 30 million units. For a few years, it utterly dominated the console market and the company “rushed out games, assuming that its customers would play whatever it released,” note New York Times reporters Sam Grobart and Ian Austen. But demand rapidly dried up as other consoles and personal computers took the lead with more powerful, flexible platforms and games. In the end, “millions of unsold games and consoles were buried in a New Mexico landfill in 1983. Warner Communications, which bought Atari in 1976 for $28 million, sold it in 1984 for no cash.”

The next generation of machines was dominated by Nintendo and Sega. But by the turn of the century, more new faces appeared and disrupted the second generation of market leaders. Sony (PlayStation) and Microsoft (Xbox) introduced powerful new consoles that continue to evolve to this day. Both console lines have already cycled through multiple iterations, each increasingly powerful and more functional. Sega dropped out of the console business and refocused on game development. Nintendo managed to survive with its innovative “Wii” system, but has fallen from its perch as king of the console market. Many also forget Apple’s failed run at the console business with its “Pippin” system in the late 1990s. Steve Jobs killed off the console when he returned to once again lead Apple in 1997. Ironically, just a decade later, with the rise of the iPhone and the Apple App Store, the company would emerge as a major player in the gaming market as smartphone gaming exploded.

Of course, PC gaming existed across these generations, and handheld gaming devices and now smartphones also provide competition to traditional consoles. Arcade games have existed throughout as well. Thus, the video game market has always been broader than just home gaming consoles.

Nonetheless, at no time during the turbulent history of this sector have major consoles interoperated. The result has been a constant effort by major console developers to leap-frog the competition with increasingly innovative and powerful consoles and peripherals. Would Microsoft have developed the Kinect motion-sensing device if Nintendo had not previously developed its game-changing Wii motion controllers? It’s impossible to know, but it would seem that non-interoperability had something to do with that beneficial development. Microsoft needed a game-changing peripheral of its own to meet the Nintendo challenge since Nintendo was not about to share its innovations with the competition. Meanwhile, Sony has developed its own motion-based “Move” system to compete with Microsoft and Nintendo.

This is a highly dynamic marketplace at work. Could policymakers have predicted that three major non-interoperable home consoles would produce so much innovation? Would they have judged that to be too much or too little competition? Would they have been able to foresee or help bring about the disruptive competition from portable gaming devices or smartphones? What sort of interop regulation would have made that happen?

As Palfrey and Gasser suggest in their book, there really “is no single form or optimal amount of interoperability that will suit every circumstance.” The video game case study seems to prove that. Yet, their framework leaves the door open a bit wider for state meddling to determine “optimal interop.” I have little faith that state planners could have given us a more innovative video game marketplace through interop nudging. And I also worry that if the door had been open for regulators at the FCC or elsewhere to influence interoperability decisions, it might also have opened the door to content regulation since many lawmakers have long had an appetite for video game censorship.

Smartphones

The mobile phone handset and operating system marketplace has undergone continuous change over the past 15 years and is still evolving rapidly. There are some interoperable elements, such as the ability to make connecting calls and send texts and IMs. But other parts of the smartphone ecosystem are not interoperable, such as underlying operating systems or apps and app stores.

In the midst of this mixed system of interoperable and non-interoperable elements, innovation and cut-throat competition have flourished.

When cellular telephone service first started taking off in the mid-1990s, handsets and mobile operating systems were essentially one and the same, and Nokia and Motorola dominated the sector with fairly rudimentary devices. The era of personal digital assistants (PDAs) dawned during this period, but mostly saw a series of overhyped devices, including Apple’s “Newton,” that failed to catch on. In the early 2000s, however, a host of new players and devices entered the market, many of which are still on the scene today, including LG, Sony, Samsung, Siemens, and HTC. Importantly, the sector began splitting into handsets versus operating systems (OS). Leading mobile OS makers have included: Microsoft, Palm, Symbian, BlackBerry (RIM), Apple, and Android (Google).

The sector continues to undergo rapid change and interoperability norms have evolved at the same time. Looking back, it’s hard to know whether increased interoperability would have helped or hurt the state of competition and innovation.

Consider Palm, BlackBerry, and Microsoft, which all limited interoperability with other systems in various ways. Palm smartphones, for example, were wildly popular for a brief time and brought many innovations to the marketplace. Palm underwent many ownership and management changes, however, and rapidly faded from the scene. After buying Palm in 2010, HP announced it would use its webOS platform in a variety of new products. That effort failed, however, and HP instead announced it would transition webOS to an open source software development model.

Similarly, RIM’s BlackBerry was thought to be the dominant smartphone device for a time, but it has recently been decimated. BlackBerry’s rollercoaster ride has left it “trying to avoid the hall of fallen giants” in the words of an early 2012 New York Times headline.  The company once commanded more than half of the American smartphone market but now has under 10 percent, and that number continues to fall.

Microsoft also had a huge lead in licensing its Windows Mobile OS to high-end smartphone handset makers until Apple and Android disrupted its business. It’s hard to believe now, but just a few years ago the idea of Apple or Google being serious contenders in the smartphone business was greeted with suspicion, even scorn, by popular handset makers such as Nokia and Motorola. This serves as another classic example of those with a static snapshot mentality disregarding the potential for new entry and technological disruption. Just a few years later, Nokia’s profits and market share have plummeted and a struggling Motorola was purchased by Google. Meanwhile, Palm seems dead, BlackBerry is dying, and Microsoft is struggling to win back market share it has lost to Apple and Google in this arena.

It would seem logical to conclude that the ebbs and flows of interoperable and non-interoperable elements of the smartphone world have created a turbulent but vibrantly innovative sector. Has the lack of interoperable operating systems or apps and app stores hurt smartphone consumers? It’s hard to see how. Mandating interoperability at either level could lead to an OS or app store monopoly, most likely for Apple if such a policy were pursued today.

While Apple has had great success and earned endless kudos from consumers and tech wonks alike for its slick, user-friendly innovations, some critics decry its proprietary business model and more “controlled” user experience. Apple tightly controls almost every level of production of its iPhone smartphone and iPad tablet. Interoperability with competing systems, standards, or technologies is limited in many ways. Is that bad? Some critics think so, suggesting that greater “openness” — presumably in the form of greater device or program interoperability — is needed. But so what? Consumers seem extremely happy with Apple devices. Moreover, well-heeled rivals like Google (Android) and Microsoft continue to innovate at a healthy clip and offer consumers a decidedly different user experience. As with video game consoles, non-interop has had some important dynamic effects and advantages for consumers. It’s hard to know what “optimal interoperability” would even look like in the modern smartphone marketplace and how it would be achieved, but it’s equally hard to believe that consumers would be significantly better off if regulators were trying to achieve it through top-down mandates on such a dynamic, fast-moving market. [For more on this topic, see my 2011 book chapter, “The Case for Internet Optimism, Part 2 – Saving the Net From Its Supporters,” from the book, The Next Digital Decade.]

Case Study Summary & Analysis

These case studies suggest that defining “optimal interoperability” is a pipe dream. In some cases, consumers demanded a certain amount of interoperability and they got it. But it seems equally obvious that they did not demand perfect interoperability in every case. Few consumers are tripping over their own feet in a mad rush to toss out their Xboxes or iPhones just because they are not perfectly interoperable. On the other hand, since the days of the old “walled garden” hell of AOL, CompuServe, Prodigy, and so on, it would seem that information technology markets are growing more “open” in other ways. You can’t completely lock down a user’s online experience and expect to win their business over the long haul.

Palfrey and Gasser make that point quite nicely in the book:

Increasingly, though, businesses are seeing the merits of strategies based on openness. A growing number of businesses are pursuing models that incorporate interoperability as a core principle. More and more firms, especially in the information business, are shedding their proprietary approaches in favor of interoperability at multiple levels. The goal is not to be charitable to competitors or customers, of course, but to maximize returns over time by building an ecosystem with others that holds greater promise than the go-it-alone approach (p. 149).

Quite right, but let’s not pretend that any mass market information platforms or systems will ever be perfectly “open” or interoperable. There will always be some limitations on how such systems are used or shared. And that’s just fine once you embrace a more flexible theory of evolutionary interoperability.  Ongoing experiments will get us to a better place.

Conclusion: Let Interop Experiments Continue!

So, let me wrap up by restating my alternative theory of optimal interoperability as succinctly as possible: When in doubt, ongoing, bottom-up, dynamic experimentation will almost always yield better answers than arbitrary intervention and top-down planning. Again, that is not to say that all interoperability experiments will leave society better off in the short-term. Some interoperability experiments and resulting market norms or outcomes can create challenging dilemmas for individuals and institutions. There may be short-term spells of “market power,” for example, and some standards may get locked in longer than some of us think makes sense. If, however, we have faith in humans to solve problems with information and technology, then still more experimentation — not state intervention — is the answer. And that is especially true once you accept the fact that those seeking to intervene have very limited knowledge of all the relevant facts needed to even make wise decisions about the future course of technology markets or information systems.

Some will find my alternative theory of optimal interoperability no more satisfying than Palfrey and Gasser’s since they may find the experimental interop framework too inflexible when it comes to state action. Whereas the frustration with Palfrey and Gasser’s theory will likely flow from their failure to define a coherent standard for when intervention is warranted, my approach solves that problem by suggesting we should largely abandon the endeavor and instead let ongoing market experiments solve interop problems over time. For me, we would need to find ourselves in a veritable whole-world-is-about-to-go-to-hell sort of moment before I could go along with state intervention to tip the interop scales in one direction or another. And, generally speaking, this is exactly the sort of thing that antitrust laws are supposed to address after a clear showing of harm to consumer welfare. Stated differently, to the extent any state intervention to address interoperability can be justified, ex post antitrust remedies should almost always trump ex ante regulatory meddling.

This alternative vision of evolutionary, experimental interoperability will be rejected by those who believe the state has the ability to wisely intervene and nudge markets to achieve “optimal interoperability” through some sort of Goldilocks principle that can supposedly get it just right. For those of us who have doubts about the likelihood of such sagacious state action — especially for fast-paced information sectors — the benefits of ongoing marketplace experimentation far outweigh the costs of letting those experiments run their course.

Regardless, we should be thankful that John Palfrey and Urs Gasser have provided us with a book that so perfectly frames what should be a very interesting ongoing debate over these issues. I encourage everyone to pick up a copy of Interop so you can join us in this important discussion.
