Economics

by Walter Stover and Anne Hobson

Franklin Foer’s article in The Atlantic on Jeff Bezos’s master plan offers insight into the mind of the famed CEO, but his argument that Amazon is all-powerful is flawed. Foer overlooks the role of consumers in shaping Amazon’s narrative, and in doing so he overestimates both Bezos’s actual autonomy and Amazon’s power over its consumers.

The article falls prey to an atomistic theory of Amazon. The thinking goes like this: I am an atom, and Amazon is a (much) larger atom. Because Amazon is so much larger than I am, I need some intervening force to ensure that Amazon does not prey on me. This intervening force must belong to an even larger atom (the U.S. government) in order to check Amazon’s power. The atomistic lens sees individuals as interchangeable and isolated from each other, able to be considered one at a time.

Foer’s application of this theory appears in his treatment of Hayek, one of the staunchest opponents of aggregation and atomism. For example, when he summarizes Hayek’s paper “The Use of Knowledge in Society,” he renders Hayek’s argument as the claim that “…no bureaucracy could ever match the miracle of markets, which spontaneously and efficiently aggregate the knowledge of a society.” Yet Hayek found the notion of aggregation highly problematic, as seen in another of his articles, “Competition as a Discovery Procedure,” in which he criticizes the idea of a “scientific” objective approach to measuring market variables. Arguing against trying to build a science on macroeconomic variables, he notes that “…the coarse structure of the economy can exhibit no regularities that are not the results of the fine structure… and that those aggregate or mean values… give us no information about what takes place in the fine structure.”

Neither Amazon nor the market can aggregate the knowledge of a society. We can try to speak of the market in aggregate terms, but we end up summing away the differences between individuals and concealing the action and agency of the individuals at the bottom. We cannot speak of market activity without reference to the patterns of individual interactions. It is best to think of the market as an emergent, unintended outcome of a constellation of individual actors, not atoms, each of whom has different talents, wants, knowledge, and resources. Actors enter into exchanges with each other and form complicated, semi-rigid, multi-leveled social networks.
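Hayek’s point that aggregates conceal the fine structure is easy to illustrate numerically. Here is a minimal sketch, with all numbers invented purely for illustration: two markets with identical average prices can summarize completely different underlying distributions of individual valuations.

```python
# Two hypothetical markets with identical aggregate (mean) prices.
# All numbers invented purely for illustration.
market_a = [10, 10, 10, 10]   # homogeneous individual valuations
market_b = [1, 1, 1, 37]      # a single outlier dominates the average

mean_a = sum(market_a) / len(market_a)
mean_b = sum(market_b) / len(market_b)

print(mean_a == mean_b)                      # True: the aggregates match...
print(sorted(market_a) == sorted(market_b))  # False: the fine structure does not
```

The mean alone cannot tell us which of these worlds we are in; only the individual-level data distinguishes them.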


by Adam Thierer and Trace Mitchell

This essay originally appeared in the Washington Examiner on September 12, 2019.

You won’t find President Trump agreeing with Hillary Clinton and Barack Obama on many issues, but the need for occupational licensing reform is one major exception. They, along with many other politicians and academics both Left and Right, have identified how state and local “licenses to work” restrict workers’ opportunities and mobility while driving up prices for consumers.

Of course, not everybody has to agree with high-profile Democrats and Republicans, but let’s at least welcome the chance to discuss something important without defaulting to our partisan bunkers.

This past week, for example, ThinkProgress published an article titled “Koch Brothers’ anti-government group promotes allowing unlicensed, untrained cosmetologists.” Centered on an Americans for Prosperity video highlighting the ways in which occupational licensing reform could lower some of the barriers that prevent people from bettering their lives, the article painted a picture of an ideologically driven, right-wing movement.

In reality, it’s anything but that.

Jaron Lanier was featured in a recent New York Times op-ed explaining why people should get paid for their data. Under this scheme, he estimates that the data of a four-person household could fetch around $20,000.

Let’s do the math on that.

eMarketer data show that users spend about an hour and fifteen minutes per day on social media, for a total of 456.25 hours per year. Dividing the household estimate by four gives $5,000 per person, so by Lanier’s estimates the income from data would come to about $10.96 per hour. That’s not too bad!
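For readers who want to check the arithmetic, here is the calculation spelled out. The $20,000 figure is Lanier’s household estimate, treated here as annual (which the per-year hours imply); the usage hours are from eMarketer.

```python
# Checking the post's back-of-the-envelope math.
household_value = 20_000           # Lanier's estimate for a four-person household
per_person = household_value / 4   # $5,000 per person

minutes_per_day = 75               # eMarketer: ~1h15m of social media per day
hours_per_year = minutes_per_day / 60 * 365
print(hours_per_year)              # 456.25

hourly = per_person / hours_per_year
print(round(hourly, 2))            # 10.96
```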

By any measure, however, the estimate is high. Since I have written extensively on this subject (see this, this, and this), I thought it might be helpful to explain the four general methods used to value an intangible like data: income methods, market rates, cost methods, and, finally, shadow prices.

This essay was originally published on the AIER blog on August 8, 2019.

In a new Atlantic essay, Patrick Collison and Tyler Cowen suggest that “We Need a New Science of Progress,” a field that “would study the successful people, organizations, institutions, policies, and cultures that have arisen to date, and it would attempt to concoct policies and prescriptions that would help improve our ability to generate useful progress in the future.” Collison and Cowen refer to this project as Progress Studies.

Is such a field of study possible, and would it really be a “science”? I think the answer is yes, but with some caveats. Even if it proves to be an inexact science, however, the effort is worth undertaking. 

Thinking about Progress

Progress Studies is a topic I have spent much of my life thinking and writing about, most recently in my book Permissionless Innovation and in a new paper, “Technological Innovation and Economic Growth,” co-authored with James Broughel. My work has argued that nations that are open to risk-taking, trial-and-error experimentation, and technological dynamism (i.e., “permissionless innovation”) are more likely to enjoy sustained economic growth and prosperity than those rooted in precautionary-principle thinking and policies (i.e., prior restraints on innovative activities). A forthcoming book of mine on the future of entrepreneurialism and innovation will delve even deeper into these topics and address criticisms of technological advancement.


When it comes to the threat of automation, I agree with Ryan Khurana: “From self-driving car crashes to failed workplace algorithms, many AI tools fail to perform simple tasks humans excel at, let alone far surpass us in every way.” Like me, he is skeptical that automation will unravel the labor market, pointing out that “[The] conflation of what AI ‘may one day do’ with the much more mundane ‘what software can do today’ creates a powerful narrative around automation that accepts no refutation.”

Khurana marshals a number of examples to make this point:

Google needs to use human callers to impersonate its Duplex system on up to a quarter of calls, and Uber needs crowd-sourced labor to ensure its automated identification system remains fast, but admitting this makes them look less automated…

London-based investment firm MMC Ventures found that out of the 2,830 startups they identified as being “AI-focused” in Europe, 40% used no machine learning tools, whatsoever.

I’ve been collecting examples of the AI hype machine as well. Here are some of my favorites.

– Coauthored with Mercatus MA Fellow Walter Stover

The use of artificial intelligence (AI) in dynamic pricing has given rise to fears of ‘digital market manipulation.’ Proponents of this claim argue that companies leverage AI to obtain greater information about people’s biases and then exploit those biases for profit through personalized pricing. Those who advance these arguments often support regulation to protect consumers against information asymmetries and subsequent coercive market practices. Such fears, however, ignore the importance of the institutional context: these market manipulation tactics will have little effect precisely because they lack the coercive power to force people to open their wallets. Coercive power is a function of social and political institutions, not of the knowledge of people’s biases and preferences that algorithms could gather.

As long as companies such as Amazon operate in a competitive market setting, they are constrained in their ability to coerce customers, who can vote with their feet regardless of how much knowledge the companies gather about their preferences through AI technology.
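The constraint can be sketched in a few lines. This is a toy model with invented numbers, not a claim about any real firm’s pricing: however precisely a seller estimates a customer’s willingness to pay, a competitor’s price caps what it can actually charge.

```python
# Toy model: personalized pricing under competition.
# All numbers invented for illustration; none describe a real firm.

def sale_price(willingness_to_pay: float, rival_price: float) -> float:
    """Even a seller that knows a customer's exact willingness to pay
    cannot charge more than the rival's price: the customer walks."""
    personalized = willingness_to_pay       # the AI-informed ideal markup
    return min(personalized, rival_price)   # competition caps it

print(sale_price(100, 60))  # 60: the rival's price binds, not the $100 valuation
print(sale_price(50, 60))   # 50: below the rival's price, the estimate binds
```

The more competitive the market, the lower `rival_price` sits, and the less the seller’s informational advantage is worth.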

A decade ago, a heated debate raged over the benefits of “a la carte” (or “unbundling”) mandates for cable and satellite TV operators. Regulatory advocates said consumers wanted to buy all TV channels individually to lower costs. The FCC under former Republican Chairman Kevin Martin came close to mandating a la carte.

But the math just didn’t add up. A la carte mandates, many economists noted, would actually cost consumers just as much (or even more) once they repurchased all the individual channels they desired. And it wasn’t clear people really wanted a completely atomized one-by-one content shopping experience anyway.
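A toy version of that arithmetic, with invented prices: even a viewer who repurchases only the channels she actually watches can end up paying as much as, or more than, the old bundle.

```python
# Toy arithmetic: a bundle vs. repurchasing channels a la carte.
# All prices invented for illustration.
bundle_price = 60                  # hypothetical monthly bundle
a_la_carte = {"news": 8, "sports": 15, "movies": 12, "kids": 6,
              "local": 5, "drama": 9, "documentary": 7}

# A viewer who buys each watched channel individually:
individual_total = sum(a_la_carte.values())
print(individual_total)                 # 62: more than the bundle
print(individual_total > bundle_price)  # True: no savings in this sketch
```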

Throughout media history, bundles of all different sorts had been used across many different sectors (books, newspapers, music, etc.). This was because consumers often enjoyed the benefits of getting diverse content delivered to them in an all-in-one package. Bundling also helped media operators create and sustain a diversity of content using creative cross-subsidization schemes. The traditional newspaper format and business is perhaps the greatest example of media bundling: the classifieds and sports sections helped cross-subsidize hard news (especially local reporting). See this 2008 essay by Jeff Eisenach and me for more details on the economics of a la carte.

Yet, with the rise of cable and satellite television, some critics protested the use of bundles for delivering content. Even though it was clear that the incredible diversity of 500+ channels on pay TV was directly attributable to strong channels cross-subsidizing weaker ones, many regulatory advocates said we would be better off without bundles. Moreover, they said, online video markets could show us the path forward in the form of radically atomized content options and cheaper prices.

Flash-forward to today.

Why should we really care about technological innovation? My Mercatus Center colleague James Broughel and I have just published a paper answering that question. In “Technological Innovation and Economic Growth: A Brief Report on the Evidence,” we summarize the extensive body of evidence that discusses the relationship between innovation, growth, and human prosperity. We note that while economists, political scientists, and historians don’t agree on much, there exists widespread consensus among them that there is a symbiotic relationship between the pace of innovation and the progress of civilization. Our 27-page paper documenting the academic evidence on this issue can be downloaded on SSRN or from the Mercatus website. Here’s the abstract:

Technological innovation is a fundamental driver of economic growth and human progress. Yet some critics want to deny the vast benefits that innovation has bestowed and continues to bestow on mankind. To inform policy discussions and address the technology critics’ concerns, this paper summarizes relevant literature documenting the impact of technological innovation on economic growth and, more broadly, on living standards and human well-being. The historical record is unambiguous regarding how ongoing innovation has improved the way we live; however, the short-term disruptive aspects of technological change are real and deserve attention as well. The paper concludes with an extended discussion about the relevance of these findings for shaping cultural attitudes toward technology and the role that public policy can play in fostering innovation, growth, and ongoing improvements in the quality of life of citizens.

Contemporary tech criticism displays a kind of anti-nostalgia. Instead of reverence for the past, anxiety about the future abounds. In these visions, the future is imagined as a strange, foreign land beset with problems. And yet, to quote the old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today.

The distance between the now and the future finds its hook in tech policy in the pacing problem, a term describing the mismatch between advancing technologies and society’s efforts to cope with them. Vivek Wadhwa explained that, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” In The Laws of Disruption, Larry Downes explained the pacing problem like this: “technology changes exponentially, but social, economic, and legal systems change incrementally.” Or, as Adam Thierer wondered, “What happens when technological innovation outpaces the ability of laws and regulations to keep up?”
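Downes’s formulation can be expressed as a toy calculation. The growth rates here are invented for illustration; the point is only that compound (exponential) change pulling against additive (incremental) change produces a gap that never closes.

```python
# Toy version of the pacing problem: compound vs. additive change.
# Growth rates invented for illustration.
tech, law = 1.0, 1.0
gap = []
for year in range(10):
    tech *= 1.4        # exponential: technological capability compounds
    law += 0.4         # incremental: legal systems add a fixed step
    gap.append(tech - law)

# The gap between capability and rules widens every year.
print(all(later > earlier for earlier, later in zip(gap, gap[1:])))  # True
```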

Here are three short responses.

Over at the Mercatus Center Bridge blog, Trace Mitchell and I just posted an essay entitled “A Non-Partisan Way to Help Workers and Consumers,” which discusses the Federal Trade Commission’s (FTC) new Economic Liberty Task Force report on occupational licensing.

We applaud the FTC’s calls for greater occupational licensing uniformity and portability, but we regret the missed opportunity to address the root problem of excessive licensing more generally. Policymakers need to confront the sheer absurdity of licensing so many jobs that pose zero risk to public health and safety. Licensing has become completely detached from risk realities and actual public needs.

As the FTC notes, excessive licensing limits employment opportunities, worker mobility, and competition while also “resulting in higher prices, reduced quality, and less convenience for consumers.” These are unambiguous facts, widely accepted by experts of all stripes. The Obama and Trump administrations, for example, have been in complete agreement on the need for comprehensive licensing reform.