The Limits of AI in Predicting Human Action

February 8, 2019

Coauthored with Mercatus MA Fellow Walter Stover

Imagine visiting Amazon’s website to buy a Kindle. The product description shows a price of $120. You purchase it, only for a co-worker to tell you he bought the same device for just $100. What happened? Amazon’s algorithm predicted that you would be willing to pay more for the same device. Amazon and other companies before it, such as Orbitz, have experimented with dynamic pricing models that feed personal data collected on users into machine learning algorithms to try to predict how much different individuals are willing to pay. Instead of a fixed price point, users may now see different prices according to the profile the company has built up of them. This has led the U.S. Federal Trade Commission, among others, to explore fears that AI, in combination with big datasets, will harm consumer welfare by enabling companies to manipulate consumers and increase their profits.
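To make the mechanism concrete, here is a deliberately toy sketch of how such a personalized pricing model might work. Everything in it (the feature names, the weights, the two shopper profiles) is an invented assumption for illustration, not a description of Amazon’s or any company’s actual system:

```python
# A toy, illustrative dynamic-pricing sketch; not any company's real system.
# It "predicts" a shopper's willingness to pay from recorded profile features
# and quotes a personalized price. Every feature and weight here is invented.

BASE_PRICE = 100.00  # the price an anonymous shopper would see

# Hypothetical weights mapping profile features to a price adjustment.
# In a real system these would be learned from data on past purchases.
WEIGHTS = {
    "bought_premium_before": 12.00,    # history of buying higher-end models
    "browsed_from_new_phone": 5.00,    # crude proxy for disposable income
    "abandoned_cart_recently": -8.00,  # signal of price sensitivity
}

def personalized_price(profile: dict) -> float:
    """Quote the base price plus the sum of weights for features present."""
    adjustment = sum(weight for feature, weight in WEIGHTS.items()
                     if profile.get(feature, False))
    return round(BASE_PRICE + adjustment, 2)

you = {"bought_premium_before": True, "browsed_from_new_phone": True}
coworker = {"abandoned_cart_recently": True}

print(personalized_price(you))       # 117.0: quoted more, like the $120 Kindle
print(personalized_price(coworker))  # 92.0: quoted less
```

Note what the sketch makes visible: the model only ever sees recorded features of past behavior. The shopper’s actual reasons for buying never enter it.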

The promise of personalized shopping and the threat of consumer exploitation, however, both presuppose that AI will be able to predict our future preferences. By gathering data on our past purchases, our almost-purchases, our search histories, and more, the fear goes, advanced AI will build a detailed profile that it can then use to estimate our future preference for a certain good under particular circumstances. This escalates until companies can anticipate our preferences and pressure us at exactly the right moments to ‘persuade’ us into buying something we ordinarily would not.

Such a scenario cannot come to pass. No matter how much data companies gather on individuals, and no matter how sophisticated AI becomes, the data needed to predict our future choices do not exist in a complete or capturable way. Treating consumer preferences as discoverable through sufficiently sophisticated search technology ignores a critical distinction between information and knowledge. Information is objective, searchable, and gatherable. When we talk about ‘data’, we are usually referring to information: particular observations of specific actions, conditions, or choices that we can see in the world. An individual’s salary, geographic location, and purchases are data with an objective, concrete existence that a company can gather and feed into its algorithms.

Not all data, however, exist objectively. Individuals do not make choices based on preset, fixed rankings; they ‘color’ their decisions with subjective interpretations of the information available to them. When you purchase a Kindle, for instance, perhaps you are buying it because you travel frequently and cannot carry many physical books with you. This subjective plan is not directly observable or recordable; only the actual purchase shows up as a data point. Unlike information, knowledge is contextual: it is generated from an individual’s interpretation of information against the background of conditions particular to their local time and place. Machine learning algorithms make predictions from second-hand, objective data that cannot perfectly reflect the subjective knowledge the individual used to make their decision.

This does not make prediction impossible; if the actions and decisions of others carried no useful information, the price system as a whole could not function. AI can still help companies make predictions, but the contextual nature of knowledge restricts the kind of prediction it can make. In 1974, economist F.A. Hayek distinguished between pattern predictions about broad trends in a system and point predictions about what a particular individual or component of the system will do next. We often treat the difference between the two as a technological problem, but it is an epistemic one. As Don Lavoie put it in National Economic Planning:

“The knowledge relevant for economic decision-making exists in a dispersed form that cannot be fully extracted by any single agent in society. But such extraction is precisely what would be required if this knowledge were to be made usable.”

[Lavoie, Don. 1985. National Economic Planning: What Is Left? Page 56]

Let’s assume for a second that AIs could possess not only all relevant information about an individual, but also that individual’s knowledge. Even if companies somehow could gather this knowledge, it would only be a snapshot at a moment in time. Countless converging factors can affect your next decision not to purchase a soda, even if your past purchase history suggests you will. Maybe you went to the store that day with a stomach ache. Maybe your doctor just warned you about the perils of high-fructose corn syrup, so you forgo the purchase. Maybe an AI-driven price increase prompts you to find an alternative seller.

In other words, when you interact with the market (going to the store to buy groceries, for instance), you are participating in a discovery process about your own preferences and willingness to pay. Every decision emerges organically from an array of influences, both internal and external, that exist at that given moment. The best that any economic decision-maker, including Amazon, can do is make pattern predictions from stale data; those patterns cannot anticipate these organic decisions and carry no guarantee of persisting into the future. AI can be thought of as a technology that reduces the cost of pattern predictions by better collecting and interpreting the available data, but the data that would enable either humans or machines to make point predictions simply do not exist.
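A small simulation, built entirely on assumptions invented here for illustration, helps show why this gap is epistemic rather than technological: when each choice depends on unrecorded same-day context, aggregate demand remains forecastable even as any one person’s next purchase does not.

```python
import random

random.seed(42)
N_SHOPPERS, N_DAYS = 1000, 30

# Each shopper has an observable "base" propensity to buy a soda: the kind
# of thing a purchase history reveals.
base = [random.uniform(0.2, 0.8) for _ in range(N_SHOPPERS)]

def buys(i: int) -> bool:
    # The actual choice also depends on unrecorded same-day context
    # (a stomach ache, a doctor's warning, a rival seller's discount).
    context = random.uniform(-0.3, 0.3)
    return random.random() < min(max(base[i] + context, 0.0), 1.0)

pattern_errors = []
point_hits = 0
for _ in range(N_DAYS):
    choices = [buys(i) for i in range(N_SHOPPERS)]
    # Pattern prediction: expected total sales from base propensities alone.
    pattern_errors.append(abs(sum(choices) - sum(base)) / N_SHOPPERS)
    # Point prediction: guess each individual's choice from the same data.
    point_hits += sum((base[i] > 0.5) == choices[i] for i in range(N_SHOPPERS))

print(f"pattern forecast error: {sum(pattern_errors) / N_DAYS:.1%} of shoppers")
print(f"point prediction accuracy: {point_hits / (N_SHOPPERS * N_DAYS):.1%}")
```

On a typical run, the aggregate forecast misses by only about one percent of shoppers, while guessing each individual’s choice from the same data succeeds only about two-thirds of the time. The pattern is predictable; the point is not.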

When we make grand claims about AI’s ability to price products the way Uber does, we forget the role of human action in consuming these services. As Will Rinehart argues, “prices convey information, which then allows for individual participants to act.” The point is that no matter how much information companies collect, and no matter how sophisticated AI becomes, consumer preferences are not determined ahead of time, existing concretely for an AI to discover. The data predicting these exact choices do not exist, because the patterns of choices individuals make are defined by the process of exchange and interaction itself. As long as the competitive forces that drive this process continue to operate, we need not fear that dynamic pricing models will erode consumer welfare.

In short, choice is genuine and powerful; we do not carry around a static schedule in our heads of what prices we are willing to pay for which goods under which circumstances. Instead, we make choices based on our knowledge and unintentionally reveal our preferences, not just to others but often to ourselves as well. As economist James Buchanan put it, market “participants do not know until they enter the process what their own choices will be.” Our preferences, such as they are, are continually created and updated in the process of interaction itself. People’s preferences are consequently moving targets that cannot be accurately forecast by AI from data reflecting past choices.

What do these insights mean for discussions about protecting consumers from exploitative manipulation by companies such as Amazon? First, the epistemic obstacles algorithms face mean that worst-case scenarios are unlikely to come about. Instead, the benefits of algorithmic dynamic pricing are likely to outweigh the societal costs. For example, consumers benefit from the Google Chrome add-on Honey, which combs the web for the best coupons to apply when checking out any given product.

Policymakers should be wary of regulating companies to protect consumers against a threat that may never appear. If consumers choose to use platforms such as Amazon or Spotify that gather personal data, we should not automatically assume those algorithms will erode consumer welfare. Policymakers who rush to protect consumers, overestimating the forecasting capabilities of AI and underestimating the entrepreneurial capability of individuals in the market, risk stifling the boon to consumers born of technological innovation in AI. They should instead leave room for individuals and firms to work out the best tradeoff between privacy and tailored customer service.
