Book Review: Brown & Marsden’s “Regulating Code”

June 27, 2013

Ian Brown and Christopher T. Marsden’s new book, Regulating Code: Good Governance and Better Regulation in the Information Age, will go down as one of the most important Internet policy books of 2013 for two reasons. First, their book offers an excellent overview of how Internet regulation has unfolded on five different fronts: privacy and data protection; copyright; content censorship; social networks and user-generated content issues; and net neutrality regulation. They craft detailed case studies that incorporate important insights about how countries across the globe are dealing with these issues. Second, the authors endorse a specific normative approach to Net governance that they argue is taking hold across these policy arenas. They call their preferred policy paradigm “prosumer law,” and it envisions an active role for governments, which they think should pursue “smarter regulation” of code.

In terms of organization, Brown and Marsden’s book follows the same format found in Milton Mueller’s important 2010 book Networks and States: The Global Politics of Internet Governance; both books feature meaty case studies in the middle bookended by chapters that endorse a specific approach to Internet policymaking. (Incidentally, both books were published by MIT Press.) And, also like Mueller’s book, Brown and Marsden’s Regulating Code does a somewhat better job using case studies to explore the forces shaping Internet policy across the globe than it does making the normative case for the authors’ preferred approach to these issues.

Thus, for most readers, the primary benefit of reading either book will be to see how the respective authors develop rich portraits of the institutional political economy surrounding various Internet policy issues over the past 10 to 15 years. In fact, of all the books I have read and reviewed in recent years, I cannot think of two titles that have done a better job developing detailed case studies for such a diverse set of issues. For that reason alone, both texts are important resources for those studying ongoing Internet policy developments.

That’s not to say the books fail to make a solid case for their preferred policy paradigms; it’s just that the normative elements of the texts are overshadowed by the excellent case studies. As a result, readers are left wanting more detail about what their respective policy paradigms would (or should) mean in practice. Regardless, in the remainder of this review, I’ll discuss Brown and Marsden’s normative approach to digital policy and contrast it with Mueller’s, since the two differ sharply and help frame the policy battles to come on this front.

Governing Cyberspace: Mueller vs. Brown & Marsden

Mueller’s normative goal in Networks and States was to breathe new life into the old cyber-libertarian philosophy that was more prevalent during the Net’s founding era but which has lost favor in recent years. He made the case for a “cyberliberty” movement rooted in what he described as a “denationalized liberalism” vision of Net governance. He argued that “we need to find ways to translate classical liberal rights and freedoms into a governance framework suitable for the global Internet. There can be no cyberliberty without a political movement to define, defend, and institutionalize individual rights and freedoms on a transnational scale.”

I wholeheartedly endorsed that vision in my review of Mueller’s book, even if he was a bit short on the details of how to bring it about. But it is useful to keep Mueller’s paradigm in mind because it provides a nice contrast with the approach Brown and Marsden advocate, which is quite different.

Generally speaking, Brown and Marsden reject most forms of “Internet exceptionalism” and certainly reject the sort of “cyberliberty” ethos that Mueller and I embrace. They instead endorse a fairly broad role for governments in ordering the affairs of cyberspace. In their self-described “prosumer” paradigm, the State is generally viewed as a benevolent actor, well-positioned to guide the course of code development toward supposedly more enlightened ends.

Consistent with the strong focus on European policymaking found throughout the book, the authors are quite enamored with the “co-regulatory” models that have become increasingly prevalent across the continent. Like many other scholars and policy advocates today, they occasionally call for “multi-stakeholderism” as a solution, but they do not necessarily mean the sort of truly voluntary, bottom-up multi-stakeholderism of the Net’s early days. Rather, they are usually thinking of multi-stakeholderism as what is essentially pluralistic politics: the government setting the table, inviting the stakeholders to it, and then guiding (or at least “nudging”) policy along the way. “We are convinced that fudging with nudges needs to be reinforced with the reality of regulation and coregulation, in order to enable prosumers to maximize their potential on the broadband Internet,” they say. (p. 187)

Meet the New Boss, Same as the Old Boss?

Thus, despite the new gloss, their “prosumer law” paradigm ends up sounding quite a bit like a rehash of traditional “public interest” law and common carrier regulation, albeit with a new appreciation of just how dynamic markets built on code can be. Indeed, Brown and Marsden repeatedly acknowledge how often law and regulation fail to keep pace with the rapid evolution of digital technology. “Code changes quickly, user adoption more slowly, legal contracting and judicial adaptation to new technologies slower yet, and regulation through legislation slowest of all,” they correctly note (p. xv). This reflects what Larry Downes refers to as the most fundamental “law of disruption” of the digital age: “technology changes exponentially, but social, economic, and legal systems change incrementally.”

At the end of the day, however, that insight doesn’t seem to inform Brown and Marsden’s policy prescriptions all that much. Theirs is a world in which policy tinkering errors will apparently be corrected promptly and efficiently by still more policy tinkering, or “smarter regulation.” Moreover, like many other Internet policy scholars today, they don’t mind regulatory interventions that come early and often since they believe that will help regulators get out ahead of the technological curve and steer markets in preferred directions. “If regulators fail to address regulatory objects at first, then the regulatory object can grow until its technique overwhelms the regulator,” they say (p. 31).

This is the same mentality that is often on display in Tim Wu’s work, which I have been quite critical of here and elsewhere. For example, Wu has advocated informal “agency threats” and the use of “threat regimes” to accomplish policy goals that prove difficult to steer through the formal democratic rulemaking process. As part of his “defense of regulatory threats in particular contexts,” Wu stresses the importance of regulators taking control of fast-moving tech markets early in their life cycles. “Threat regimes,” Wu argues, “are best justified when the industry is undergoing rapid change — under conditions of ‘high uncertainty.’ Highly informal regimes are most useful, that is, when the agency faces a problem in an environment in which facts are highly unclear and evolving. Examples include periods surrounding a newly invented technology or business model, or a practice about which little is known,” Wu concludes.

This is essentially where most of the “co-regulation” schemes that Brown and Marsden favor would take us: Code regulators would take an active role in shaping the evolution of digital technologies and markets early in their life cycles. What are the preferred regulatory mechanisms? Like Wu and many other cyberlaw professors today, Brown and Marsden favor robust interconnection and interoperability mandates bolstered by antitrust actions. And, again, they aren’t willing to wait around and let the courts adjudicate these issues in an ex post fashion. “Essential facilities law is a very poor substitute for the active role of prosumer law that we advocate, especially in its Chicago school minimalist phase” (p. 185). In other words, we shouldn’t wait for someone to bring a case and litigate it through the courts when preemptive, proactive regulatory interventions can sagaciously steer us to a superior end.

More specifically, they propose that “competition authorities should impose ex ante interoperability requirements upon dominant social utilities… to minimize network barriers” (p. 190) and they model this on traditional regulatory schemes such as must-carry obligations, API interface disclosure requirements, and other interconnection mandates (such as those imposed on AOL/Time Warner a decade ago to alleviate fears about instant messaging dominance). They also note that “Effective, scalable state regulation often depends on the recruitment of intermediaries as enforcers” to help achieve various policy objectives (p. 170).

The Problem with Interoperability Über Alles

So, in essence, the Brown-Marsden Internet policy paradigm might be thought of as interoperability über alles. Interoperability and interconnection in pursuit of more “open” and “neutral” systems are generally considered an unalloyed good, and most everything else is subservient to this objective.

This is a serious policy error and one that I address in great detail in my absurdly long review of John Palfrey and Urs Gasser’s Interop: The Promise and Perils of Highly Interconnected Systems. I’m not going to repeat all 6,500 words of that critique here when you can just click back and read it, but here’s the high-level summary: There is no such thing as “optimal interoperability” that can be determined in an a priori fashion. Ongoing marketplace experimentation with technical standards, modes of information production and dissemination, and interoperable information systems is almost always preferable to the artificial foreclosure of this dynamic process through state action. The former allows for better learning and coping mechanisms to develop while also incentivizing the spontaneous, natural evolution of the market and market responses. The latter (regulatory foreclosure of experimentation) limits that potential.

More importantly, when interoperability is treated as sacrosanct and forcibly imposed through top-down regulatory schemes, it will often have many unintended consequences and costs. It can even lock in existing market power and market structures by encouraging users and companies to flock to a single platform instead of trying to innovate around it. (Go back and take a look at how the “Kingsbury Commitment” — the interconnection deal from the early days of the U.S. telecom system — actually allowed AT&T to gain greater control over the industry instead of assisting independent operators.)

Citing Palfrey and Gasser, Brown and Marsden do note that “mandated interoperability is neither necessary in all cases nor necessarily desirable” (p. 32), but they don’t spend as much time as Palfrey and Gasser itemizing these trade-offs and the potential downsides of some interoperability mandates. What frustrates me about both books is the quasi-religious reverence accorded to interoperability and open standards when such faith is simply not warranted once historical experience is taken into account.

Plenty of the best forms of digital innovation today are due to a lack of interoperability and openness. Proprietary systems have produced some of the most exciting devices (iPhone) and content (video games) of modern times. Then again, voluntarily interoperable and “open” services and devices thrive, too. The key point here — and one that I develop in far greater detail in my book chapter, “The Case for Internet Optimism, Part 2 – Saving the Net From Its Supporters” — is that the market for digital services is working marvelously and providing us with choices of many different flavors. Innovation continues to unfold rapidly in both directions along the “open” vs. “closed” continuum. (Here are 30 more essays I have written on this topic if you need more proof.)

Generally speaking, we should avoid mandatory interop and openness solutions. We should instead push those approaches and solutions in a truly voluntary, bottom-up fashion. And, more importantly, we should be pushing for outside-the-box solutions of the Schumpeterian (creative destruction / disruptive innovation) variety instead of so quickly abandoning competition in favor of forced sharing mandates.

The Case for Patience & Policy Restraint

But Brown and Marsden clearly do not subscribe to that sort of Schumpeterian thinking. They think most code markets tip and lock into monopoly in fairly short order and that only wise interventions can rectify that. For example, they claim that Facebook’s “monopoly is now durable,” which will certainly come as a big surprise to the millions of us who do not use it at all. And the story of MySpace’s rapid rise and equally precipitous fall has little bearing on that assessment, they argue.

But, no matter how you define the “social networking market,” here are two facts about it: First, it is still very, very young. It’s only about a decade old. Second, in that short period of time, we have already witnessed the entire first generation of players fall by the wayside. While the second generation is currently dominated by Facebook, it is by no means alone. Again, millions like me don’t use it at all and get along just fine with other “social networking” technologies, including Twitter, LinkedIn, Google+, and even older tech like email, SMS, and yes, phone calls! Accusations of “monopoly” in this space strain credulity in the extreme. I invite you to read my Mercatus working paper, “The Perils of Classifying Social Media Platforms as Public Utilities,” for a more thorough debunking of this logic. (Note: The final version of that paper will be published in the CommLaw Conspectus shortly.)

Such facts should have a bearing on the debate about regulatory interventions. We continue to witness the power of Schumpeterian rivalry as new and existing players battle in a race for the prize of market power. Brown and Marsden fear that the race is already over in many sectors and that it is time to throw in the towel and get busy regulating. But when I look around at the information technology marketplace today, I am astonished by how radically different it looks from even a few years ago, and not just in the social media market. I have written extensively about the smartphone marketplace, where innovation continues at a frantic pace. As I noted in my essay here on “Smartphones & Schumpeter,” it’s hard to remember now, but just six short years ago:

  • The iPhone and Android had not yet landed.
  • Most of the best-selling phones of 2007 were made by Nokia and Motorola.
  • Feature phones still dominated the market; smartphones were still a luxury (and a clunky luxury at that).
  • There were no app stores, and what “apps” did exist were mostly proprietary and device- or carrier-specific.
  • There was no 4G service.

It’s also easy to forget just how many market analysts and policy wonks were making absurd predictions at the time about how the telecom operators had so much market power that they would crush new innovation without regulation. Instead, in very short order, the market was completely upended in a way that mobile providers never saw coming. There was a huge shift in relative market power flowing from the core of these markets to the fringes, especially to Apple, which wasn’t even a player in that space before the launch of the iPhone.

As I noted in concluding that piece last year, these facts should lead us to believe that this is a healthy, dynamic marketplace in action. Not even Schumpeter could have imagined creative destruction on this scale. (Just look at BlackBerry.) But much the same could be said of many other sectors of the information economy. While it is certainly true that many large players exist, we continue to see a healthy amount of churn in these markets and an astonishing amount of technological innovation.

Public Choice Insights: What History Tells Us

One would hope these realities would have a greater bearing on the policy prescriptions suggested by analysts like Brown and Marsden, but they don’t seem to. Instead, the attitude on display here is that governments can, generally speaking, act wisely and nudge efficiently to correct short-term market hiccups and set us on a better course. But there are strong reasons to question that presumption.

Specifically, what I found most regrettable about Brown and Marsden’s book was the way — like all too many books in this field these days — the authors briefly introduce “public choice” insights and concerns only to summarily dismiss them as unfounded or overblown. (See my review of Brett Frischmann’s book, Infrastructure: The Social Value of Shared Resources, for a more extended discussion of this problem as it pertains not just to infrastructure regulation but to the regulation of all complex industries and technologies.)

Brown and Marsden make it clear that their intentions are pure and that their methods would incorporate the lessons of the past, but they aren’t very interested in dwelling on the long, lamentable history of regulatory failures and capture in the communications and media policy sectors. They do note the dangers of a growing “security-industrial complex” and argue that “commercial actors dominate technical actors in policy debates.” They also say that the “potential for capture by regulated interests, especially large corporate lobbies, is an essential insight” that informs their approach. The problem is that it really doesn’t. They largely ignore those insights and instead imply that, to the extent this is a problem at all, we can build a better breed of bureaucrats going forward who will craft “smarter regulation” that is immune from such pressures. Or, they claim that “multi-stakeholderism” — again, the new, more activist and government-influenced conception of it — can overcome these public choice problems.

A better understanding of power politics that is informed by the wisdom of the ages would instead counsel that minimizing the scope of politicization of technology markets is the better remedy. Capture and cronyism in communications and media markets have always grown in direct proportion to the overall scope of law governing those sectors. (I invite you to read all the troubling examples of this that Brent Skorup and I have documented in our new 72-page working paper, “A History of Cronyism and Capture in the Information Technology Sector.” Warning: It makes for miserable reading but proves beyond any doubt that there is something to public choice concerns.)

To be clear, it’s not that I believe that “market failures” or “code failures” never occur; rather, as I noted in this debate with Larry Lessig, it’s that such problems are typically “better addressed by voluntary, spontaneous, bottom-up, marketplace responses than by coerced, top-down, governmental solutions. Moreover, the decisive advantage of the market-driven approach to correcting code failure comes down to the rapidity and nimbleness of those response(s).” It’s not just that traditional regulatory remedies cannot keep pace with code markets; it’s that those attempting to craft the remedies do not possess the requisite knowledge to steer us down a superior path. (See my essay, “Antitrust & Innovation in the New Economy: The Problem with the Static Equilibrium Mindset,” for more on that point.)

Regardless, at a minimum, I expect scholars to take seriously the very real public choice problems at work in this arena. You cannot talk about the history of these sectors without acknowledging the horrifically anti-consumer policies that were often put in place at the request of one industry or another to shield itself from disruptive innovation. No amount of wishful thinking about “prosumer” policies will change these grim political realities. Only by minimizing the opportunities to politicize technology markets and decisions can we overcome these problems.

Conclusion

For those of us who prefer to focus on freeing code, Brown and Marsden’s Regulating Code is another reminder that liberty is increasingly a loser in Internet policy circles these days. Milton Mueller’s dream of decentralized, denationalized liberalism seems more and more unlikely as armies of policymakers, regulators, special interests, regulatory advocates, academics, and others all line up and plead for their pet interest or cause to be satisfied through pure power politics. No matter what you call it — fudging, nudging, coregulation, smart regulation, multistakeholderism, prosumer law, or whatever else — there is no escaping the fact that we are witnessing the complete politicization of almost every facet of code creation and digital decisionmaking today.

Despite my deep reservations about a more politicized cyberspace, Brown and Marsden’s book is an important text because it is one of the most sophisticated articulations and defenses of that vision to date. Their book also helps us better understand the rapidly developing institutional political economy of Internet regulation in both broad and narrow policy contexts. Thus, it is worth your time and attention even if, like me, you are disheartened to be reading yet another Net policy book that ultimately endorses mandates over markets as the primary modus operandi of the information age.
