Articles by Adam Thierer

Adam Thierer is a Senior Fellow in Technology & Innovation at the R Street Institute in Washington, DC. Formerly a senior research fellow at the Mercatus Center at George Mason University, President of the Progress & Freedom Foundation, Director of Telecommunications Studies at the Cato Institute, and a Fellow in Economic Policy at the Heritage Foundation.


In a New York Times op-ed this weekend entitled “You Can’t Say That on the Internet,” Evgeny Morozov, author of The Net Delusion, worries that Silicon Valley is imposing a “deeply conservative” “new prudishness” on modern society. The cause, he says, is “dour, one-dimensional algorithms, the mathematical constructs that automatically determine the limits of what is culturally acceptable.” He proposes that some form of external algorithmic auditing be undertaken to counter this supposed problem. Here’s how he puts it in the conclusion of his essay:

Quaint prudishness, excessive enforcement of copyright, unneeded damage to our reputations: algorithmic gatekeeping is exacting a high toll on our public life. Instead of treating algorithms as a natural, objective reflection of reality, we must take them apart and closely examine each line of code.

Can we do it without hurting Silicon Valley’s business model? The world of finance, facing a similar problem, offers a clue. After several disasters caused by algorithmic trading earlier this year, authorities in Hong Kong and Australia drafted proposals to establish regular independent audits of the design, development and modifications of computer systems used in such trades. Why couldn’t auditors do the same to Google?

Silicon Valley wouldn’t have to disclose its proprietary algorithms, only share them with the auditors. A drastic measure? Perhaps. But it’s one that is proportional to the growing clout technology companies have in reshaping not only our economy but also our culture.

It should be noted that in a Slate essay this past January, Morozov had also proposed that steps be taken to root out lies, deceptions, and conspiracy theories on the Internet. Morozov was particularly worried about “denialists of global warming or benefits of vaccination,” but he also wondered how we might deal with 9/11 conspiracy theorists, the anti-Darwinian intelligent design movement, and those who refuse to accept the link between HIV and AIDS.

To deal with that supposed problem, he recommended that Google “come up with a database of disputed claims” to weed out such things. The other option, he suggested, “is to nudge search engines to take more responsibility for their index and exercise a heavier curatorial control in presenting search results for issues” that someone (he never says who) determines to be conspiratorial or anti-scientific in nature.
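Morozov never explains how such a system would actually work, but a minimal sketch helps make the gatekeeping mechanics concrete. To be clear, everything below is hypothetical: the `DISPUTED_CLAIMS` table, the substring matching, and the flagging behavior are stand-ins of my own invention, not anything Morozov or Google has described.

```python
# Hypothetical sketch of Morozov's "database of disputed claims" idea.
# Nothing here reflects a real search-engine system; the data and the
# matching logic are invented purely to illustrate the proposal.

DISPUTED_CLAIMS = {
    "vaccines cause autism": "disputed: rejected by scientific consensus",
    "9/11 was an inside job": "disputed: conspiracy theory",
}

def annotate_results(results):
    """Attach warning labels to results whose snippets match the database."""
    annotated = []
    for result in results:
        snippet = result["snippet"].lower()
        warnings = [label for claim, label in DISPUTED_CLAIMS.items()
                    if claim in snippet]
        annotated.append({**result, "warnings": warnings})
    return annotated

# One result gets flagged; the other passes through untouched.
for r in annotate_results([
    {"url": "example.org/a", "snippet": "Study claims vaccines cause autism"},
    {"url": "example.org/b", "snippet": "CDC posts vaccine schedule"},
]):
    print(r["url"], r["warnings"])
```

Even this toy version surfaces the obvious problem: someone has to decide what goes into the database, and on what authority.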

Taken together, these essays can be viewed as a preliminary sketch of what could become a comprehensive information control apparatus instituted at the code layer of the Internet. Continue reading →

Here’s a presentation I delivered on “The War on Vertical Integration in the Digital Economy” at the latest meeting of the Southern Economic Association this weekend. It outlines concerns about vertical integration in the tech economy and specifically addresses regulatory proposals set forth by Tim Wu (arguing for a “separations principle” for the tech economy) & Jonathan Zittrain (arguing for “API neutrality” for social media and digital platforms). This presentation is based on two papers published by the Mercatus Center at George Mason University: “Uncreative Destruction: The Misguided War on Vertical Integration in the Information Economy” (with Brent Skorup) & “The Perils of Classifying Social Media Platforms as Public Utilities.”

Here’s a presentation I’ve been using lately for various audiences about “Cronyism: History, Costs, Case Studies and Solutions.” In the talk, I offer a definition of cronyism, explain its origins, discuss how various academics have traditionally thought about it, outline a variety of case studies, and then propose a range of solutions. Readers of this blog might be interested because I briefly mention the rise of cronyism in the high-tech sector. Brent Skorup and I have a huge paper in the works on that topic, which should be out early next year.

Also, here’s a brief video of me discussing why corporate welfare doesn’t work, which was shot after I recently made this presentation at an event down in Florida. Continue reading →

The precautionary principle generally states that new technologies should be restricted or heavily regulated until they are proven absolutely safe. In other words, out of an abundance of caution, the precautionary principle holds that it is “better to be safe than sorry,” regardless of the costs or consequences. The problem with that, as Kevin Kelly reminded us in his 2010 book, What Technology Wants, is that because “every good produces harm somewhere… by the strict logic of an absolute Precautionary Principle no technologies would be permitted.” The precautionary principle is, in essence, the arch-enemy of progress and innovation. Progress becomes impossible when experimentation and trade-offs are considered unacceptable.

I was reminded of that fact while reading this recent piece by Marc Scribner in the Washington Post, “Driverless Cars Are on the Way. Here’s How Not to Regulate Them.” Scribner highlights the efforts of the D.C. Council to regulate autonomous vehicles. A new bill introduced by Council member Mary Cheh (D-Ward 3) proposes several preemptive regulations before driverless autos would be allowed on the streets of Washington. Scribner summarizes the provisions of the bill and their impact: Continue reading →

Yesterday it was my privilege to speak at a Free State Foundation (FSF) event on “Ideas for Communications Law and Policy Reform in 2013.” It was moderated by my friend and former colleague Randy May, who is president of FSF, and the event featured opening remarks from the always-excellent FCC Commissioner Robert McDowell.

During the panel discussion that followed, I offered my thoughts about the problem America continues to face in cleaning up communications and media law and proposed a few ideas to get reform done right once and for all. I don’t have time to formally write up my remarks, but I thought I would just post the speech notes that I used yesterday and include links to the relevant supporting materials. (I’ve been using a canned version of this same speech at countless events over the past 15 years. Hopefully lawmakers will take up some of these reforms sometime soon so I’m not using this same set of remarks in 2027!)

Continue reading →

We spend a lot of time here defending the simple proposition that flexible free-market pricing is a good thing. You would think that in 2012 we wouldn’t need to do so, but there’s a growing movement afoot today by some academics, regulatory activists, and public policymakers to have government start asserting more authority over broadband pricing. In particular, they want Congress, the FCC, or state officials to investigate and possibly even regulate efforts by wireline and wireless broadband carriers to use usage-based pricing and data caps as a method of calibrating supply and demand. This was the focus of my last weekly Forbes column, “The Specter Of Broadband Price Controls.” In the piece I note that:

Data caps and usage-based pricing are forms of what economists refer to as price discrimination. Although viewed with suspicion by some policymakers and regulatory-minded academics and activists, price discrimination is widely recognized to improve consumer welfare. Price-differentiated and prioritized services are part of almost every industrial sector in our capitalist economy. Notable examples include airline and hotel reservations, prioritized shipping services, amusement park passes, and fuel and energy pricing. Economists agree that price discrimination represents a sensible way to calibrate supply and demand while ensuring the fixed costs of doing business get covered. Consumers benefit from such pricing experimentation by gaining more options while firms gain more certainty about investment and service decisions.
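To see the supply-and-demand calibration in action, consider a stylized bill under usage-based pricing. The base price, cap, and overage rate below are invented for illustration and don’t correspond to any actual carrier’s plan:

```python
# Stylized usage-based broadband bill. All prices and caps are invented
# for illustration; they are not taken from any real carrier's plan.

def monthly_bill(gb_used, base_price=50.0, cap_gb=250, overage_per_gb=1.50):
    """Flat base price up to the data cap, then a per-GB overage fee."""
    overage_gb = max(0, gb_used - cap_gb)
    return base_price + overage_gb * overage_per_gb

print(monthly_bill(30))   # 50.0  -- a light user pays only the base rate
print(monthly_bill(400))  # 275.0 -- a heavy user covers more of the network costs
```

Under a mandatory flat rate, by contrast, the light user in this example would subsidize the heavy one, which is precisely the cross-subsidy that usage-based pricing unwinds.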

This is confirmed by an excellent new Mercatus Center working paper on “The Impact of Data Caps and Other Forms of Usage-Based Pricing for Broadband Access,” by Daniel A. Lyons, an assistant professor of law at Boston College Law School. Lyons explains why a return to price controls for communications would be monumentally misguided. Continue reading →

[UPDATE 4/30/13: This article was subsequently published in Volume 65, Issue 2 of the Federal Communications Law Journal in April 2013. The links below now point to the final FCLJ version.]

The Mercatus Center at George Mason University has just released a new paper by Brent Skorup and me entitled, “Uncreative Destruction: The War on Vertical Integration in the Information Economy.” Brent, who is the research director for the Information Economy Project at the George Mason University School of Law, and I have been working on this paper since the spring, and we are looking forward to getting it published in a law review shortly. The paper focuses on Tim Wu’s “separations principle” for the digital economy, something I’ve spent some time critiquing here in the past. Here’s the introduction from the 44-page paper that Brent and I just released:

Are information sectors sufficiently different from other sectors of the economy such that more stringent antitrust standards should be applied to them preemptively? Columbia Law School professor Tim Wu responds in the affirmative in his book The Master Switch: The Rise and Fall of Information Empires. Having successfully pushed net-neutrality regulation into the policy spotlight, Wu has turned his attention to what he regards as excessive market concentration and threats to free speech throughout the entire information economy. To support his call for increased antitrust intervention, Wu explains his view of competition in the information economy—a view that deviates substantially from current mainstream antitrust theory.

Continue reading →

Looking for a concise overview of how Internet architecture has evolved and a principled discussion of the public policies that should govern the Net going forward? Then look no further than Christopher Yoo’s new book, The Dynamic Internet: How Technology, Users, and Businesses are Transforming the Network. It’s a quick read (just 140 pages) and is worth picking up. Yoo is a Professor of Law, Communication, and Computer & Information Science at the University of Pennsylvania and also serves as the Director of the Center for Technology, Innovation & Competition there. For those who monitor ongoing developments in cyberlaw and digital economics, Yoo is a well-known and prolific intellectual who has established himself as one of the giants of this rapidly growing policy arena.

Yoo makes two straightforward arguments in his new book. First, the Internet is changing. In Part 1 of the book, Yoo offers a layman-friendly overview of the changing dynamics of Internet architecture and engineering. He documents the evolving nature of Internet standards, traffic management and congestion policies, spam and security control efforts, and peering and pricing policies. He also discusses the rise of peer-to-peer applications, the growth of mobile broadband, the emergence of the app store economy, and what the explosion of online video consumption means for ongoing bandwidth management efforts. Those are the supply-side issues. Yoo also outlines the implications of changes on the demand side of the equation, such as changing user demographics and rapidly evolving demands from consumers. He notes that these new demand-side realities of Internet usage are resulting in changes to network management and engineering, further reinforcing changes already underway on the supply side.

Yoo’s second point in the book flows logically from the first: as the Internet continues to evolve in such a highly dynamic fashion, public policy must as well. Yoo is particularly worried about calls to lock in standards, protocols, and policies from what he regards as a bygone era of Internet engineering, architecture, and policy. “The dramatic shift in Internet usage suggests that its founding architectural principles from the mid-1990s may no longer be appropriate today,” he argues. (p. 4) “[T]he optimal network architecture is unlikely to be static. Instead, it is likely to be dynamic over time, changing with the shifts in end-user demands,” he says. (p. 7) Thus, “the static, one-size-fits-all approach that dominates the current debate misses the mark.” (p. 7) Continue reading →

I’ve been hearing more rumblings about “API neutrality” lately. This idea, which originated with Jonathan Zittrain’s book, The Future of the Internet–And How to Stop It, proposes to apply Net neutrality to the code/application layer of the Internet. A blog called “The API Rating Agency,” which appears to be written by Mehdi Medjaoui, posted an essay last week endorsing Zittrain’s proposal and adding some meat to the bones of it. (My thanks to CNet’s Declan McCullagh for bringing it to my attention).

Medjaoui is particularly worried about some of Twitter’s recent moves to crack down on 3rd party API uses. Twitter is trying to figure out how to monetize its platform and, in a digital environment where advertising seems to be the only business model that works, the company has decided to establish more restrictive guidelines for API use. In essence, Twitter believes it can no longer be a perfectly open platform if it hopes to find a way to make money. The company apparently believes that some restrictions will need to be placed on 3rd party uses of its API if the firm hopes to be able to attract and monetize enough eyeballs.

While no one is sure whether that strategy will work, Medjaoui doesn’t even want the experiment to go forward. Building on Zittrain, he proposes the following approach to API neutrality:

  • Absolute non-discrimination toward third parties: all content, data, and views are distributed equally across the third-party ecosystem. Even a competitor could use an API under the same conditions as everyone else, with unrestricted re-use of the data.
  • Limited discrimination without tiering: if you don’t pay specific fees for quality of service, you cannot get better quality of service (rate limits, quotas, SLAs) than anyone else in the API ecosystem. If you pay for a higher quality of service, you receive it, but on the same terms as any other customer paying the same fee.
  • First come, first served: no enqueuing of API calls from paying third-party applications while free third-party applications remain within their rate limits.
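To make the second rule concrete, here is a minimal sketch of the kind of tiered rate limiting it would permit: paying clients buy a larger quota, but every client within a tier faces an identical limit. The tier names, quotas, and API shape are all invented for illustration, not drawn from Twitter’s actual platform rules.

```python
import time

# Hypothetical tiered rate limiter illustrating "limited discrimination
# without tiering": higher tiers buy bigger quotas, but every client in
# the same tier gets the same treatment. All numbers are invented.

TIER_QUOTA_PER_HOUR = {"free": 150, "paid": 1500}

class RateLimiter:
    def __init__(self):
        self.calls = {}  # client_id -> timestamps of recent calls

    def allow(self, client_id, tier):
        """Permit the call only if the client is under its tier's hourly quota."""
        now = time.time()
        recent = [t for t in self.calls.get(client_id, []) if now - t < 3600]
        if len(recent) >= TIER_QUOTA_PER_HOUR[tier]:
            return False  # over quota -- the same cutoff applies tier-wide
        recent.append(now)
        self.calls[client_id] = recent
        return True

limiter = RateLimiter()
print(limiter.allow("app-123", "free"))  # True until the 150th call this hour
```

Note what the neutrality rule would forbid: the platform could not give its own apps, or favored partners, a quota unavailable to rivals paying the same price.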

Before I critique this, let’s go back and recall why Zittrain suggested we might need API neutrality for certain online services or digital platforms. Continue reading →

Psychologists Daniel Simons and Christopher Chabris had an interesting editorial in The Wall Street Journal this weekend asking, “Do Our Gadgets Really Threaten Planes?” They conducted an online survey of 492 American adults who have flown in the past year and found that “40% said they did not turn their phones off completely during takeoff and landing on their most recent flight; more than 7% left their phones on, with the Wi-Fi and cellular communications functions active. And 2% pulled a full Baldwin, actively using their phones when they weren’t supposed to.”

Despite the prevalence of such law-breaking activity, planes aren’t falling from the sky, and yet the Federal Aviation Administration continues to enforce the rule prohibiting the use of digital gadgets at certain times during flight. “Why has the regulation remained in force for so long despite the lack of solid evidence to support it?” Simons and Chabris ask. They note:

Human minds are notoriously overzealous “cause detectors.” When two events occur close in time, and one plausibly might have caused the other, we tend to assume it did. There is no reason to doubt the anecdotes told by airline personnel about glitches that have occurred on flights when they also have discovered someone illicitly using a device. But when thinking about these anecdotes, we don’t consider that glitches also occur in the absence of illicit gadget use. More important, we don’t consider how often gadgets have been in use when flights have been completed without a hitch. Our survey strongly suggests that there are multiple gadget violators on almost every flight.
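That last claim is easy to check with a back-of-the-envelope calculation. Assume a typical single-aisle flight of 150 passengers (my assumption, not the authors’) and take the survey’s 7% figure for phones left fully active:

```python
# Back-of-the-envelope check of "multiple gadget violators on almost
# every flight." The 150-passenger flight is an assumption of mine;
# the 7% share of fully active phones comes from the survey.

p_active = 0.07   # phones left on with cellular/Wi-Fi active (survey)
passengers = 150  # assumed flight size

p_none = (1 - p_active) ** passengers
print(f"P(no active phone on board) = {p_none:.6f}")   # ~0.000019

print(f"Expected active phones per flight = {p_active * passengers:.1f}")  # ~10.5
```

If roughly ten phones stay active on a typical flight and planes keep landing safely, the anecdotal case for interference looks thin.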

That’s all certainly true, but what actually motivated this ban — and has ensured its continuation despite a lack of evidence that it is needed to diminish technological risk — is the precautionary principle. As the authors correctly note: Continue reading →