Articles by Anne Hobson

Anne Hobson is a Program Manager for Academic & Student Programs at the Mercatus Center at George Mason University. Previously, she worked as a technology policy fellow for the R Street Institute and a public policy associate at Facebook. Anne is currently pursuing a PhD in economics at George Mason University. She received her MA in applied economics from George Mason University and her BA in International Studies from Johns Hopkins University.


by Walter Stover and Anne Hobson

Franklin Foer’s article in The Atlantic on Jeff Bezos’s master plan offers insight into the mind of the famed CEO, but his argument that Amazon is all-powerful is flawed. Foer overlooks the role of consumers in shaping Amazon’s narrative. In doing so, he overestimates both Bezos’s actual autonomy and Amazon’s power over its consumers.

The article falls prey to an atomistic theory of Amazon. The thinking goes like this: I am an atom, and Amazon is a (much) larger atom. Because Amazon is so much larger than I am, I need some intervening force to ensure that Amazon does not prey on me. This intervening force must belong to an even larger atom (the U.S. government) in order to check Amazon’s power. The atomistic lens sees individuals as interchangeable and isolated from each other, able to be considered one at a time.

Foer’s application of this theory appears in his treatment of Hayek, one of the staunchest opponents of aggregation and atomism. For example, when he summarizes Hayek’s paper “The Use of Knowledge in Society,” he renders Hayek’s argument as the claim that “…no bureaucracy could ever match the miracle of markets, which spontaneously and efficiently aggregate the knowledge of a society.” Hayek, however, found the notion of aggregation highly problematic, as seen in another of his articles, “Competition as a Discovery Procedure,” in which he criticizes the idea of a “scientific” objective approach to measuring market variables. His argument against trying to build a science on macroeconomic variables notes that “…the coarse structure of the economy can exhibit no regularities that are not the results of the fine structure… and that those aggregate or mean values… give us no information about what takes place in the fine structure.”

Neither Amazon nor the market can aggregate the knowledge of a society. We can try to speak of the market in aggregate terms, but we end up summing away the differences between individuals and concealing the action and agency of the individuals at the bottom. We cannot speak of market activity without reference to the patterns of individual interactions. It is best to think of the market as an emergent, unintended outcome of a constellation of individual actors, not atoms, each of whom has different talents, wants, knowledge, and resources. Actors enter into exchanges with each other and form complicated, semi-rigid, multi-leveled social networks.


– Coauthored with Mercatus MA Fellow Walter Stover

The growing use of artificial intelligence (AI) in dynamic pricing has given rise to fears of “digital market manipulation.” Proponents of this claim argue that companies leverage AI to obtain greater information about people’s biases and then exploit those biases for profit through personalized pricing. Those who advance these arguments often support regulation to protect consumers against information asymmetries and subsequent coercive market practices. Such fears, however, ignore the importance of the institutional context. These market manipulation tactics will have little effect precisely because they lack the coercive power to force people to open their wallets. Coercive power is a function of social and political institutions, not of the knowledge of people’s biases and preferences that algorithms could gather.

As long as companies such as Amazon operate in a competitive market setting, they are constrained in their ability to coerce customers, who can vote with their feet, regardless of how much knowledge they gather about those customers’ preferences through AI technology.

– Coauthored with Mercatus MA Fellow Walter Stover

Imagine visiting Amazon’s website to buy a Kindle. The product description shows a price of $120. You purchase it, only for a co-worker to tell you he bought the same device for just $100. What happened? Amazon’s algorithm predicted that you were willing to pay more for the same device than your co-worker was. Amazon, and other companies before it such as Orbitz, have experimented with dynamic pricing models that feed personal data collected on users to machine learning algorithms to try to predict how much different individuals are willing to pay. Instead of a fixed price point, users can now see different prices according to the profile the company has built of them. This has led the U.S. Federal Trade Commission, among others, to explore fears that AI, in combination with big datasets, will harm consumer welfare by enabling companies to manipulate consumers to increase profits.
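To make the mechanism concrete, here is a deliberately toy sketch of how such a personalized pricing model might work. Nothing here reflects Amazon’s or Orbitz’s actual systems: the profile features, weights, price menu, and demand curve are all invented for illustration. A seller scores each user’s predicted willingness to pay from collected profile data, then quotes whichever price on a small menu maximizes expected revenue.

```python
import math

def purchase_probability(price, willingness_to_pay, sensitivity=0.2):
    """Toy logistic demand curve: the probability of buying falls
    as the quoted price rises above the user's predicted
    willingness to pay."""
    return 1.0 / (1.0 + math.exp(sensitivity * (price - willingness_to_pay)))

def predict_willingness(profile):
    """Hypothetical linear score built from profile data the seller
    has collected; every feature name and weight is made up."""
    return (60
            + 0.4 * profile["past_avg_order"]        # bigger past orders
            + 15.0 * profile["owns_premium_device"]  # crude income proxy
            - 10.0 * profile["uses_price_tracker"])  # price-conscious signal

def personalized_price(profile, menu=(60, 80, 100, 120)):
    """Quote the menu price that maximizes expected revenue,
    i.e. price * probability of purchase at that price."""
    wtp = predict_willingness(profile)
    return max(menu, key=lambda p: p * purchase_probability(p, wtp))

bargain_hunter = {"past_avg_order": 25, "owns_premium_device": 0,
                  "uses_price_tracker": 1}
big_spender = {"past_avg_order": 90, "owns_premium_device": 1,
               "uses_price_tracker": 0}

print(personalized_price(bargain_hunter))  # → 60 (low predicted willingness)
print(personalized_price(big_spender))     # → 100 (high predicted willingness)
```

The sketch also shows the point the post goes on to make: the model only shifts where each shopper lands on a menu of offers. Nothing in it compels anyone to buy, and a rival seller quoting a flat low price to everyone undercuts the high quote immediately.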

The promise of personalized shopping and the threat of consumer exploitation, however, first supposes that AI will be able to predict our future preferences. By gathering data on our past purchases, our almost-purchases, our search histories, and more, some fear that advanced AI will build a detailed profile that it can then use to estimate our future preference for a certain good under particular circumstances. This will escalate until companies are able to anticipate our preferences, and pressure us at exactly the right moments to ‘persuade’ us into buying something we ordinarily would not.

Such a scenario cannot come to pass. No matter how much data companies gather on individuals, and no matter how sophisticated AI becomes, the data needed to predict our future choices do not exist in a complete or capturable form. Treating consumer preferences as discoverable through sufficiently sophisticated search technology ignores a critical distinction between information and knowledge. Information is objective, searchable, and gatherable. When we talk about “data,” we are usually referring to information: particular observations of specific actions, conditions, or choices that we can see in the world. An individual’s salary, geographic location, and purchases are data with an objective, concrete existence that a company can gather and feed into its algorithms.


“You don’t gank the noobs,” my friend’s brother explained to me, growing angrier as he watched a high-level player repeatedly stalk and then cut down my feeble, low-level night elf cleric in the massively multiplayer online roleplaying game World of Warcraft. He logged on to the server as his “main,” a high-level gnome mage, and went in search of my killer to carry out two-dimensional justice. What he meant by his exclamation was that players have developed a social norm banning the “ganking,” or killing, of low-level “noobs” just starting out in the game. He reinforced that norm by punishing the overzealous player with premature annihilation.

Ganking noobs is an example of undesirable social behavior in a virtual space, on par with cutting people off in traffic or budging in line. Punishments for these behaviors take a variety of forms, from honking, to verbal confrontation, to virtual manslaughter. Virtual reality social spaces, defined as fully artificial digital environments, are the newest medium for social interaction. Increased agency and a sense of physical presence within a VR social world like VRChat allow users to experience both positive and negative situations more intensely, thus reopening the discussion of how best to govern these spaces.
