Anne Hobson – Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

Biased AI is More Than a Technical Problem: Building a Process-oriented Policy Approach to AI Governance
Wed, 06 May 2020
Image Credit: Police Science Innovation

[Co-authored with Walter Stover]

Artificial intelligence (AI) systems have grown more prominent in both their use and their unintended effects. Just last month, the LAPD announced that it would end its use of a predictive policing system known as PredPol, which had drawn sustained criticism for reinforcing policing practices that disproportionately affect minorities. Such incidents of machine learning algorithms producing unintentionally biased outcomes have prompted calls for 'ethical AI.' However, this approach focuses on technical fixes to AI and ignores two crucial components of undesired outcomes: the subjectivity of the data fed into and out of AI systems, and the interaction between the actors who must interpret that data. When considering regulation of artificial intelligence, policymakers, companies, and other organizations using AI should therefore focus less on the algorithms and more on data and how it flows between actors, to reduce the risk of misdiagnosing AI systems. To be sure, applying an ethical AI framework is better than discounting ethics altogether, but an approach that focuses on the interaction between human and data processes is a better foundation for AI policy.

The fundamental mistake underlying the ethical AI framework is that it treats biased outcomes as a purely technical problem. If this were true, then fixing the algorithm would be an effective solution, because the outcome would be defined entirely by the tools applied. In the case of landing a man on the moon, for instance, we can tweak the telemetry of the rocket according to well-defined physical principles until the man is on the moon. In the case of biased social outcomes, the problem is not well-defined. Who decides what an appropriate level of policing is for minorities? What sentence lengths are appropriate for which groups of individuals? What is an acceptable level of bias? An AI is simply a tool that transforms input data into output data, but it is people who give meaning to the data at both steps, in the context of their understanding of these questions and of what appropriate measures of such outcomes are.

The Austrian school of economics is well suited to helping us grapple with these kinds of less well-defined problems. Austrian economists levied a similar critique against mainstream economics, which treated economic outcomes as a technical problem to be solved with specific technical decisions. The Austrians stressed the principle of methodological individualism, which holds that socioeconomic outcomes are ultimately the products of individual decisions and cannot be acted on directly by technocratic policymakers. Methodological individualism recognizes that individuals drive outcomes in two primary ways: through subjective interpretation of their environment, and through interaction with each other and with that same environment. We can sum up the application of these two aspects to AI systems in two questions: who gets the data, and where does the data go?

It matters who gets the data because the necessity of subjective interpretation will lead different people to reach separate conclusions about the same data. As an example, a set of data on financial variables such as defaults and debt repayment frequency, combined with personal characteristics such as race and geographic location, may lead one person to label African Americans as greater credit risks. Other individuals reading the same data, however, may arrive at a different conclusion: the patterns in the data stem from structural racism that has suppressed the income of African American households relative to other households, and do not indicate that those households are inherently riskier. The first interpretation would result in biased outcomes from an AI system used to generate predictions of credit risk based on that data, whereas the second interpretation might actually result in beneficial outcomes; for instance, an agency might offer more lenient terms to these individuals.

The second question, where the data goes, depends on the interaction of individuals with each other and their environment, which drives the flow of data and determines how that data is acted upon. In her book Weapons of Math Destruction, Cathy O'Neil offers a perfect example of this in her analysis of what went wrong with the LAPD's use of PredPol, which took in data on past crimes and used it to predict the geographic locations of new crimes. Police forces took this data and increased their presence in predicted crime hot spots. The result was a positive feedback loop: increased interaction between police officers and residents of those neighborhoods produced more arrests, more crime data originating in those areas, and thus more predictions of crime there, leading to over-policing of minority groups. Ultimately, the data went to a police department that unintentionally increased arrests of minority groups.
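That feedback loop can be made concrete with a toy simulation. Everything here (the patrol counts, recording probability, and starting data) is an invented assumption, not PredPol's actual model; the point is only that allocating patrols in proportion to recorded crime makes an initial bias in the data self-reinforcing even when the true crime rates are identical.

```python
# Toy model: patrols follow *recorded* crime, and patrols generate records.
# Both neighborhoods have the same true crime rate; B merely starts with
# more recorded incidents (e.g., from historically heavier policing).
import random

random.seed(42)

P_RECORD = 0.15                # chance a single patrol records an incident
recorded = {"A": 10, "B": 20}  # biased starting data: B only *looks* worse

for year in range(10):
    total = sum(recorded.values())
    # 100 patrols split according to the "hot spot" predictions, which
    # here are simply proportional to recorded crime so far
    patrols = {area: round(100 * recorded[area] / total) for area in recorded}
    for area, n in patrols.items():
        recorded[area] += sum(1 for _ in range(n) if random.random() < P_RECORD)

print(recorded)  # B keeps drawing more patrols and so accumulates more data
```

Despite identical underlying rates, neighborhood B ends the simulation with roughly twice the recorded crime of A, because the data the algorithm acts on is a record of police attention, not of crime itself.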

Together, the subjectivity of data and the importance of interaction get at a core insight of Austrian economics that follows directly from the principle of methodological individualism: context matters. If how data is interpreted and used differs from person to person, then it matters who gets the data first and how they use it, since each party may transform the data before sending it on. Thinking along these lines shifts us away from building better, more ethical AI and toward better understanding the dynamics of data within a system: who is selecting which data to feed into an AI, what data the AI then generates, and, most importantly, how that data is then acted upon and by whom. If we don't take these matters into consideration, we risk myopically focusing on fixes to the AI that will not change outcomes. In the case of PredPol, for example, the AI could have been completely transparent, but the outcome would have been the same because of how police officers were acting on the output data according to their institutional context.

Some experts are already calling for more process-oriented AI governance approaches, including the EU's High-Level Expert Group on AI and the professional services network KPMG. Carolyn Herzog, general counsel and chair of an ethics working group, comes close to the approach we are advocating when she stresses that "…data is the lifeblood of AI" and that we must pay attention to "…issues of how that data is being collected, how it is being used, and how it is being safeguarded." At present, however, this data-oriented approach is not clearly represented in U.S. policy. Recent AI policy efforts, including the ethical principles released by the Department of Defense and the Office of Management and Budget's AI guidelines, are a good first step, but they still emphasize the technology more than the data flows, and they are limited to the government's use of AI. Principle 9 of the guidelines, for instance, notes the importance of having controls to ensure the "…confidentiality, integrity, and availability of the information stored, processed, and transmitted by AI systems," but does not extend this to explicitly consider how the data is used after being transmitted.

Moreover, these proposals do not coherently lay out the relationship between data and AI outcomes, because they do not place enough emphasis on where data goes and how it is used in context after being transmitted from the AI system. Returning to our earlier point: interactions matter. Take PredPol as an example. Even if we know how data was being collected, stored, and used by PredPol and by the police department, these two pieces in isolation are not enough to understand the emergent outcome that results from the interaction between the two organizations. The critical driver is the feedback loop that emerges as data flows back and forth between PredPol and the police department. Current policy proposals risk overlooking this class of emergent AI outcomes by narrowly focusing on the AI and data practices of just one organization, rather than explicitly drawing our attention to how data circulates in the wider data ecosystem.

What's needed is a process-oriented, systemic policy approach focused not just on AI but on how data is interpreted and used in context by individuals and organizations on the ground, and on how these parties interact with each other. The NTIA would be a good convener for drafting this framework, given its success in leading a multistakeholder process to build a framework for enhancing cybersecurity. The NTIA could use the AI Now Institute's algorithmic impact assessment as a blueprint. By building a voluntary framework for AI outcomes, the NTIA can serve a dual purpose. First, it can help ease worries over how to stay compliant with best practices. Second, it can help organizations safeguard against unwanted outcomes of AI systems and more effectively identify and correct problems that do arise, instead of depending on outside forensic data analysis after the fact. The NTIA can help establish a common language for AI systems between public and private entities that gives organizations concrete steps they can take to avoid these outcomes.

Building in Accountability for Algorithmic Bias
Mon, 17 Feb 2020

– Coauthored with Anna Parsons

"Algorithms are only as good as the data that gets packed into them," said Democratic presidential hopeful Elizabeth Warren. "And if a lot of discriminatory data gets packed in, if that's how the world works, and the algorithm is doing nothing but sucking out information about how the world works, then the discrimination is perpetuated."

Warren’s critique of algorithmic bias reflects a growing concern surrounding our interaction with algorithms every day.

Algorithms leverage big data sets to make or influence decisions ranging from movie recommendations to creditworthiness. Before algorithms, humans made these decisions in advertising, shopping, criminal sentencing, and hiring. Legislative concerns center on bias: the capacity for algorithms to perpetuate gender bias and racial and minority stereotypes. Nevertheless, current approaches to regulating artificial intelligence (AI) and algorithms are misguided.

The European Union has enacted stringent data protection rules requiring companies to explain publicly how their algorithms make decisions. Similarly, the US Congress has introduced the Algorithmic Accountability Act, which would regulate how companies build their algorithms. These actions reflect the two most common approaches to addressing algorithmic bias: transparency and disclosure. In effect, such regulations require companies to publicly disclose the source code of their algorithms and explain how they make decisions. Unfortunately, this strategy would fail to mitigate AI bias, as it regulates the business model and inner workings of algorithms rather than holding companies accountable for outcomes.

Research shows that machines can treat similarly situated people and objects differently. Algorithms risk reproducing or even amplifying human biases in certain cases. For example, automated hiring systems make decisions at a faster pace and larger scale than their human counterparts, making bias more pronounced.

However, research has also shown that AI can be a helpful tool for improving social outcomes and gender equality. For example, Disney uses AI to help identify and correct human biases by analyzing the output of its algorithms. Its machine learning tool allows the company to compare the number of male and female characters in its movie scripts, as well as other factors such as the number of speaking lines for characters based on their gender, race, or disability.
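The kind of output analysis described can be miniaturized in a few lines: tally speaking lines per character attribute to surface imbalances in a script. The script data and attribute labels below are invented for illustration; Disney's actual tool is, of course, far more sophisticated.

```python
# Hypothetical miniature of a script-analysis tool: count speaking
# lines by an attribute (here, gender) to expose imbalances in output.
from collections import Counter

script_lines = [
    {"character": "Ava",  "gender": "female", "line": "We ride at dawn."},
    {"character": "Ben",  "gender": "male",   "line": "The map is wrong."},
    {"character": "Ben",  "gender": "male",   "line": "Follow me."},
    {"character": "Cole", "gender": "male",   "line": "Agreed."},
    {"character": "Ava",  "gender": "female", "line": "Not that way."},
]

lines_by_gender = Counter(entry["gender"] for entry in script_lines)
print(dict(lines_by_gender))  # -> {'female': 2, 'male': 3}
```

The same tally can be grouped by race, disability, or any other labeled attribute; the value of the exercise lies not in the counting but in who reads the counts and what they decide to change.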

AI and algorithms have the potential to increase social and economic progress. Therefore, policymakers should avoid broad regulatory requirements and focus on guidelines and policies that address harms in specific contexts. For example, algorithms making hiring decisions should be treated differently than algorithms that produce book recommendations.

Promoting algorithmic accountability is one targeted way to mitigate problems with bias. Best practices should include a review process to ensure the algorithm is performing its intended job.

Furthermore, laws applying to human decisions must also apply to algorithmic decisions. Employers must comply with anti-discrimination laws in hiring; the same principle applies to the algorithms they use.

In contrast, requiring organizations to explain how their algorithms work would prevent companies from using entire categories of algorithms. For example, machine learning algorithms construct their own decision-making systems based on databases of characteristics without exposing the reasoning behind their decisions. By focusing on accountability for outcomes, operators are free to focus on the best methods to ensure their algorithms do not further biases and to improve the public's confidence in their systems.

Transparency and explanations have other positive uses. For example, there is a strong public interest in requiring transparency in the criminal justice system. The government, unlike a private company, has constitutional obligations to be transparent. Thus, transparency requirements for the criminal justice system through risk assessments can help prevent abuses of civil rights.

The Trump administration recently released a new policy framework for artificial intelligence. It offers guidance for emerging technologies that both supports new innovations and addresses concerns about disruptive technological change. This is a positive step toward finding sensible and flexible solutions to the AI governance challenge. Concerns about algorithmic bias are legitimate, but the debate should center on a nuanced, targeted approach to regulation and avoid treating algorithmic disclosure as a cure. A regulatory approach centered on transparency requirements could do more harm than good. Instead, an approach that emphasizes accountability ensures organizations use AI and algorithms responsibly to further economic growth and social equality.

Vocational Programs Won't Hit the Mark in an Ever-changing Job Market
Tue, 04 Feb 2020

Coauthored with Mercatus MA Fellow Jessie McBirney

Flat standardized test scores, low college completion rates, and rising student debt have led many to question the bachelor's degree as the universal ticket to the middle class. Now, bureaucrats are turning to the job market for new ideas. The result is a renewed enthusiasm for Career and Technical Education (CTE), which aims to "prepare students for success in the workforce." Every high school student stands to benefit from a fun, rigorous, skills-based class, but the latest reauthorization of the Carl D. Perkins Act, which governs CTE at the federal level, betrays a faulty economic theory behind the initiative.

Modern CTE is more than a rebranding of yesterday’s vocational programs, which earned a reputation as “dumping grounds” for struggling students and, unfortunately, minorities. Today, CTE classes aim to be academically rigorous and cover career pathways ranging from manufacturing to Information Technology and STEM (science, technology, engineering, and mathematics). Most high school CTE occurs at traditional public schools, where students take a few career-specific classes alongside their core requirements.

In addition to growing skepticism toward "college for everyone," researchers have identified a "skills gap" between what employers want and the skills job-seekers offer. STEM training is a particularly trendy solution: Trump recently signed a presidential memo expanding the National Science Foundation's STEM education initiatives, and Virginia established a STEM Education Commission last year. With its many pathways, local customizability, and promise of immediate income upon graduation, CTE feels like a practical answer for young people and the economy.

As recent changes to the Perkins Act suggest, “alignment” between CTE courses and labor markets is a growing concern. Now, programs applying for federal funds must conduct a “local needs assessment” to ensure their course offerings align with local labor markets. One recent study attempted an early measure of this alignment in several metropolitan areas. Findings are mixed, but the quest for alignment itself shows how hope in career training programs has exceeded good economic sense.

Consider some of the phrases found in states’ CTE mission statements:

“…to prepare students for in-demand, high-skilled, and high-waged jobs.” (MD)

“…relevant experiences leading to purposeful and economically viable careers.” (AZ)

“…meeting the commonwealth’s need for well-trained workers.” (VA)

The desire to parse out an economy and plan accordingly is not new, but there are limits to predicting in-demand skills and future jobs. Friedrich Hayek conceived of the market not as a math problem to deconstruct but as a "discovery procedure." The market changes, rapidly and unexpectedly, based on information identified only along the way. It is the cumulative and dynamic result of thousands of individual plans coordinating through prices and wages. Thus, a central authority could never collect enough information to make accurate predictions about market outcomes. Aiming at a particular social or economic goal, such as fixing a list of gaps in the labor market, will likely come at the expense of other outcomes we didn't even consider.

For this reason, Hayek explains in The Constitution of Liberty, flourishing societies must be economically and politically free, and public education should be offered to the extent that it nurtures the independent citizens that a free society requires. Education oriented toward a particular vocational end shortchanges the student. Hayek explains:

“We are not educating people for a free society if we train technicians who expect to be ‘used,’ who are incapable of finding their proper niche themselves … All that a free society has to offer is an opportunity of searching for a suitable position, with all the attendant risk and uncertainty which such a search for a market for one’s gifts must involve.”

(Hayek 1960, 144-45).

Picking training goals for a student body is no guarantee of long term success, and may block even better outcomes. It is no accident that Hayek does not count increased earning potential or national economic strength among the reasons to publicly subsidize education. Instead, he favors general education and literacy for social cohesion and democratic participation. Rising wages for high-demand skills should entice students into sparse job markets without extra encouragement from school programs.

Hayek is not alone in his insistence that individuals are in the best position to choose and experiment with their professions. In The Wealth of Nations, Adam Smith recognizes,

"In a society where things were left to follow their natural course, where there was perfect liberty, and where every man was perfectly free both to choose what occupation he thought proper, and to change it as often as he thought proper […] every man's interest would prompt him to seek the advantageous, and to shun the disadvantageous employment."

(Smith 1776, 151)

Rather than encourage programs to narrowly direct CTE training towards local “needs,” the federal government should focus on clearing barriers to entry into those professions. It can preempt state occupational licensing laws for opticians and interior designers, among other professions. States can follow the lead of Arizona and recognize out-of-state occupational licenses.

It is worth noting that CTE advocates are not attempting to plan the American economy one web-design class at a time. High schoolers earn only 12 percent of their credits from CTE, and some of the most prominent proponents recognize the challenges a changing economy poses. But the language we use will shape our goals over time. Requiring districts to consider "labor market alignment" in their annual CTE budgets is exactly the kind of choosing between different kinds of education that Hayek cautions against. Today's alignment can be tomorrow's stagnation.

This is not to deny the academic and personal benefits of taking CTE classes. Teenagers who do are more likely to graduate high school, to get a job, and to earn higher wages right away. Other studies suggest non-academic benefits like increased attendance. It makes intuitive sense that students would welcome non-traditional learning opportunities to break up their daily studies, and that their high school experience would be better for it. But by insisting CTE programs be training for certain job categories, we may be selling students short.

Amazon and the Diffuse Power of Consumers
Mon, 11 Nov 2019

by Walter Stover and Anne Hobson

Franklin Foer’s article in the Atlantic on Jeff Bezos’s master plan offers insight into the mind of the famed CEO, but his argument that Amazon is all-powerful is flawed. Foer overlooks the role of consumers in shaping Amazon’s narrative. In doing so, he overestimates the actual autonomy of Bezos and the power of Amazon over its consumers. 

The article falls prey to an atomistic theory of Amazon. The thinking goes like this: I am an atom, and Amazon is a (much) larger atom. Because Amazon is so much larger than I am, I need some intervening force to ensure that Amazon does not prey on me. This intervening force must belong to an even larger atom (the U.S. government) in order to check Amazon’s power. The atomistic lens sees individuals as interchangeable and isolated from each other, able to be considered one at a time.

Foer's application of this theory appears in his treatment of Hayek, one of the staunchest opponents of aggregation and atomism. For example, when he summarizes Hayek's paper "The Use of Knowledge in Society," he renders Hayek's argument as the claim that "…no bureaucracy could ever match the miracle of markets, which spontaneously and efficiently aggregate the knowledge of a society." Hayek in fact found the notion of aggregation highly problematic, as seen in another of his articles, "Competition as a Discovery Procedure," in which he criticizes the idea of a "scientific," objective approach to measuring market variables. His argument against trying to build a science on macroeconomic variables notes that "…the coarse structure of the economy can exhibit no regularities that are not the results of the fine structure… and that those aggregate or mean values… give us no information about what takes place in the fine structure."

Neither Amazon nor the market can aggregate the knowledge of a society. We can try to speak of the market in aggregate terms, but we end up summing over all of the differences between individuals and concealing the action and agency of the individuals at the bottom. We cannot speak of market activity without reference to the patterns of individual interactions. It is best to think of the market as an emergent, unintended outcome of a constellation of individual actors, not atoms, each of whom has different talents, wants, knowledge, and resources. Actors enter into exchanges with each other and form complicated, semi-rigid, multi-leveled social networks.
Foer describes the great power and wealth of “knowledge” that Amazon has acquired:

“Amazon, however, has acquired the God’s-eye view of the economy that Hayek never imagined any single entity could hope to achieve. At any moment, its website has more than 600 million items for sale and more than 3 million vendors selling them. With its history of past purchases, it has collected the world’s most comprehensive catalog of consumer desire, which allows it to anticipate both individual and collective needs. With its logistics business—and its growing network of trucks and planes—it has an understanding of the flow of goods around the world. In other words, if Marxist revolutionaries ever seized power in the United States, they could nationalize Amazon and call it a day.”

“…[Amazon] distributes economists across a range of teams, where they can, among other things, run controlled experiments that permit scientific, and therefore effective, manipulation of consumer behavior.”

Yet, having data (or having PhD economists, for that matter) is not the same as having complete knowledge or predictive power. Again, the atomistic theory reappears in the assumption that the behavior of individuals can be predicted based on past information in the same way we could compute the trajectory of a single billiard ball. The local, dispersed knowledge Hayek discussed in “The Use of Knowledge” is subjectively held in the minds of the actors, and thus inaccessible to outside observers. People do not, in fact, carry around independently fixed utility functions in their heads from which they–or anyone else–can accurately predict what they will choose in the future. 

Instead, as economist James Buchanan argues, choices are genuine in that "participants do not know until they enter the process what their own choices will be." In other words, wants are themselves generated in the choosing process. Amazon cannot reverse engineer the process from any single snapshot, or any series of snapshots, because choices are subjective and dynamic. For example, any behavior, such as targeted ads or preferential pricing, based on attempts to predict future patterns of consumer behavior is itself information that enters into consumer purchasing decisions.

Even assuming that Amazon could perfectly predict consumer preferences, this would not pose any kind of threat to consumer welfare, because consumers still retain the ability to shop elsewhere. And competition on the retail front is still rampant: Walmart has 265 million customers each week and 2.2 million global employees, while Amazon has 105 million Prime subscribers and 613,300 global employees. As long as Amazon cannot exercise coercive authority over consumers, it remains unclear why, exactly, increased predictive power would damage consumer welfare.

When discussing the high level of trust consumers have in Amazon, Foer argues that “…while Amazon is trusted, no countervailing force has the inclination or capacity to restrain it.” 

Foer again ignores the diffuse countervailing power of consumers, somewhat similar to Hayek's notion of dispersed knowledge. Individuals may not be able to exercise much power by themselves against Amazon, but they wield extraordinary power when the entire constellation of actors is taken into consideration. This is not the same thing as a collective, organized response such as unions or consumer welfare protection organizations like the Better Business Bureau, though these organizations doubtless have very important roles to play.

In the same way that the market order is a spontaneous outcome of individuals pursuing their own interests, the decentralized actions of consumers can similarly determine the rise or fall of a firm without the need to organize everyone involved. Every marginal choice made in the market is a signal to the firm. A decision to buy books from another website (for example, from Alibris rather than Amazon) is a minute manifestation of this diffuse consumer power. One signal by itself does not carry much leverage, but taken in totality, these signals constitute an ordered force that exerts powerful feedback on a firm's actions.
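This aggregation of uncoordinated signals can be sketched in a toy model. All parameters below are illustrative assumptions: each consumer independently walks away with a small probability that rises as prices drift above the competitive level, and no consumer coordinates with any other.

```python
# Toy model of diffuse consumer power: individually tiny, uncoordinated
# defections aggregate into a revenue penalty that disciplines the firm.
import random

random.seed(7)

N_CONSUMERS = 100_000
COMPETITIVE_PRICE = 10.0

def customers_retained(price: float) -> int:
    """Each consumer independently walks away with a probability
    proportional to the markup over the competitive price."""
    p_leave = min(1.0, max(0.0, (price - COMPETITIVE_PRICE) / COMPETITIVE_PRICE))
    return sum(1 for _ in range(N_CONSUMERS) if random.random() >= p_leave)

revenues = {p: p * customers_retained(p) for p in (10.0, 12.0, 15.0)}
for price, revenue in revenues.items():
    print(f"price {price}: revenue = {revenue:,.0f}")
```

No single defection matters, yet total revenue falls monotonically as the markup grows: the "ordered force" is visible only in the aggregate, exactly as the diffuse-power argument suggests.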

Such a theory of diffuse power can appear profoundly unsatisfactory. We tend to favor narratives of Davids versus Goliaths in part perhaps because we are unaccustomed to trying to think about spontaneous orders in general. In the long run, Amazon will live or die, but not according to the schedule or preferences of a single consumer. 

Diffuse power might seem to be a weak check, but what is the alternative? There are risks to policy intervention (as Foer points out, Marxists could nationalize Amazon and then try to use the consolidated information to run the economy, with disastrous results). Policymakers could restrict Amazon's growth or activity in a way that limits innovation. They could step in too early and prevent consumer signals from running their course and changing Amazon's direction to align more with our wants.

Consumer signals in totality do serve as a countervailing force. And the evidence is in Foer's article: Amazon (and Bezos in particular) is obsessed with consumer satisfaction, and we all benefit from that obsession.

Note: This piece is part three of a series on the epistemic limitations of Artificial Intelligence. Part one on “The Limits of AI in Predicting Human Action” of that series can be found here. Part two on “Amazon, Artificial Intelligence, and Digital Market Manipulation” can be found here.

Amazon, Artificial Intelligence, and Digital Market Manipulation
Mon, 20 May 2019

– Coauthored with Mercatus MA Fellow Walter Stover

The advent of artificial intelligence in dynamic pricing has given rise to fears of 'digital market manipulation.' Proponents of this claim argue that companies leverage artificial intelligence (AI) technology to obtain greater information about people's biases and then exploit those biases for profit through personalized pricing. Those who advance these arguments often support regulation to protect consumers against information asymmetries and subsequent coercive market practices. Such fears, however, ignore the importance of institutional context. These market manipulation tactics will not have a great effect precisely because they lack the coercive power to force people to open their wallets. Such coercive power is a function of social and political institutions, not of the knowledge of people's biases and preferences that could be gathered from algorithms.

As long as companies such as Amazon operate in a competitive market setting, they are constrained in their ability to coerce customers who can vote with their feet, regardless of how much knowledge they actually gather about those customers’ preferences through AI technology.

On the surface, it seems reasonable to suppose that knowledge about consumer preferences leads directly to coercive power. The more I know about a consumer's weaknesses, the more I can predict their behavior and exploit such knowledge. However, let's assume that companies possess perfect knowledge of consumer preferences through advanced AI technology. Even with this knowledge, Amazon needs to worry about setting prices too high, because customers can always vote with their feet and leave for another competitor. The threat of consumer choice appears to affect Amazon's behavior; anecdotal evidence from former Amazon employees suggests that the company's pricing algorithm applies the lowest prices it can find among its competitors. If Amazon possessed true coercive power through AI, why would it use these algorithms for the benefit of the consumer unless it was worried about losing them?
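The price-matching behavior described in that anecdote can be sketched in a few lines. This is a hypothetical illustration of the rule, not Amazon's actual code; the function name and the cost floor are our assumptions.

```python
# Hypothetical sketch of a competitor price-matching rule: list at the
# lowest observed competitor price, but never below the seller's cost.

def match_lowest_price(competitor_prices, cost_floor):
    """Return the lowest competitor price, floored at cost_floor."""
    if not competitor_prices:
        return cost_floor
    return max(min(competitor_prices), cost_floor)

# A seller with a $60 cost floor facing competitors at $95, $89, and $102
# lists at $89 -- the consumer, not the seller, captures the difference.
price = match_lowest_price([95.0, 89.0, 102.0], cost_floor=60.0)
```

The point of the sketch is that a firm worried about losing customers races its price downward, which is the opposite of what an algorithm wielding coercive power would do.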

In the previous scenario, knowledge of consumer preferences does not translate into monopoly prices. To see where it potentially does translate, assume a different institutional context in which Amazon is granted a government monopoly over all sales in the United States, preventing any competitor from entering the market. In this scenario, Amazon is free to use the greater knowledge gained from AI to set monopoly prices above what consumers might ordinarily be willing to pay.
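The gap between the two institutional contexts can be made concrete with a textbook toy model. We assume a linear demand curve D(p) = a - b·p and a constant unit cost c (our assumptions for illustration, not an empirical claim about Amazon): under competition, price is driven toward marginal cost, while a protected monopolist picks the price that maximizes profit.

```python
# Toy comparison of the two institutional contexts: competition drives
# price toward marginal cost; a government-granted monopolist maximizes
# profit (p - c) * (a - b*p), which gives p* = (a + b*c) / (2*b).

def monopoly_price(a, b, c):
    """Profit-maximizing price for demand D(p) = a - b*p with unit cost c."""
    return (a + b * c) / (2 * b)

def competitive_price(c):
    """With free entry, price is competed down to marginal cost."""
    return c

# With a = 100, b = 1, c = 20: competition yields p = 20, while the
# monopolist charges p* = (100 + 20) / 2 = 60.
p_comp = competitive_price(20)
p_mono = monopoly_price(100, 1, 20)
```

The knowledge fed into the pricing rule is identical in both cases; only the institutional context, whether consumers can walk away, changes the price that results.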

These hypotheticals illustrate that the value of knowledge differs depending on the institutional context. In a competitive market, consumers can walk away at any time from proposed terms, no matter how much information the other party might have about their preferences. Note, however, that knowledge by itself does not grant the ability to set monopoly prices in our model; coercive power does not manifest as a result of knowledge, but because power is bestowed by the government.

Surprisingly, this suggests that knowledge and coercive capability do not correlate as strongly as some might think. Indeed, the best examples of coercive market behavior have little to do with sophisticated AI and more to do with asymmetries between a company and its consumers that grant it actual coercive power, such as price hikes in pharmaceuticals for consumers who have no alternatives. Note that at least part of this coercive power stems from government regulation and licensing practices.

In fact, knowledge of others' preferences is more valuable in non-coercive settings. It is precisely because I cannot force others to do my will that I must worry about what they want and what they desire. As Adam Smith wrote, "It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest." If instead I was the absolute dictator of everything, what need would I have to understand the self-interest of the butcher? Much literature has been devoted to the impossibility of economic calculation in centrally planned economies, precisely because a coercive form of decision-making has supplanted a non-coercive, distributed form, eradicating the economic value of local knowledge. Hayek and Don Lavoie, among others, have made it clear that centralized planning is infeasible because the economy itself is an emergent property of dispersed knowledge that cannot be collected by a central planner, much less a company with large market share.

If coercive power is absent, a company must appeal to consumers’ self-interest, no matter how much knowledge it may have. As long as Amazon operates in an environment where consumers can select other options, it must appeal to those consumers and provide value to maintain its market share. In fact, this greater understanding of consumer preferences is valuable to Amazon precisely because it is unable to exert coercive power. Instead of rushing to regulate the use of AI in business as a threat to consumer welfare, then, we may want to first examine the institutional environment in which a company is operating to understand if there’s any real coercive power present. It might be the case that the company is gathering that knowledge precisely because of the laws of the market, not in spite of them.

Note: This piece is part two of a series on the epistemic limitations of Artificial Intelligence. Part one of that series can be found here.

The Limits of AI in Predicting Human Action | https://techliberation.com/2019/02/08/the-limits-of-ai-in-predicting-human-action/ | Fri, 08 Feb 2019

-Coauthored with Mercatus MA Fellow Walter Stover

Imagine visiting Amazon's website to buy a Kindle. The product description shows a price of $120. You purchase it, only for a co-worker to tell you he bought the same device for just $100. What happened? Amazon's algorithm predicted that you would be willing to pay more for the same device. Amazon and other companies before it, such as Orbitz, have experimented with dynamic pricing models that feed personal data collected on users to machine learning algorithms to try to predict how much different individuals are willing to pay. Instead of a fixed price point, users may now see different prices according to the profile the company has built up of them. This has led the U.S. Federal Trade Commission, among others, to explore fears that AI, in combination with big datasets, will harm consumer welfare by enabling companies to manipulate consumers for profit.

The promise of personalized shopping and the threat of consumer exploitation, however, first supposes that AI will be able to predict our future preferences. By gathering data on our past purchases, our almost-purchases, our search histories, and more, some fear that advanced AI will build a detailed profile that it can then use to estimate our future preference for a certain good under particular circumstances. This will escalate until companies are able to anticipate our preferences, and pressure us at exactly the right moments to ‘persuade’ us into buying something we ordinarily would not.

Such a scenario cannot come to pass. No matter how much data companies can gather from individuals, and no matter how sophisticated AI becomes, the data to predict our future choices do not exist in a complete or capturable way. Treating consumer preferences as discoverable through enough sophisticated search technology ignores a critical distinction between information and knowledge. Information is objective, searchable, and gatherable. When we talk about ‘data’, we are usually referring to information: particular observations of specific actions, conditions or choices that we can see in the world. An individual’s salary, geographic location, and purchases are data with an objective, concrete existence that a company can gather and include in their algorithms.

Not all data, however, exist objectively. Individuals do not make choices based on preset, fixed rankings, but ‘color’ their decisions with subjective interpretation of the information available to them. When you purchase a Kindle, for instance, perhaps you are purchasing it because you travel frequently and can’t take a lot of physical books with you. This subjective plan is not directly available and recordable; only the actual purchase shows up as a data point. Machine learning algorithms make predictions based on second-hand, objective data that cannot perfectly reflect the subjective data or knowledge that the individual used to make their decision. Unlike information, knowledge is contextual and is generated from an individual’s interpretation of information against the background of conditions particular to their local time and place.

This does not make prediction impossible; if the actions and decisions of others held no useful information content, the price system as a whole would not function. AI can still assist companies with making predictions, but the contextual nature of knowledge restricts the kind of prediction it can make. In 1974, economist F.A. Hayek distinguished between pattern predictions about broad trends in systems and point predictions about what a particular individual or component of the system might do next. We often treat the difference between the two as a technological problem, but it is an epistemic one. As Don Lavoie put it in National Economic Planning:

“The knowledge relevant for economic decision-making exists in a dispersed form that cannot be fully extracted by any single agent in society. But such extraction is precisely what would be required if this knowledge were to be made usable.”

[Lavoie, Don. 1986. National Economic Planning: What Is Left?, p. 56]
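Hayek's distinction can be illustrated with a small simulation. We model shoppers whose purchase decisions depend on an unobservable subjective context (a stomach ache, a doctor's warning); the model, its parameters, and the price threshold are all our assumptions for illustration. The share of purchases across many shoppers, a pattern prediction, comes out stable, while any one shopper's next choice, a point prediction, depends on a context no dataset records.

```python
# Sketch of pattern vs. point predictions: aggregates are forecastable,
# an individual's next choice is not, because it turns on hidden context.
import random

def buys_soda(price, context):
    """One shopper's decision; 'context' stands in for the subjective
    knowledge (mood, advice, plans) that no dataset records."""
    willingness = 2.0 + context  # hidden context shifts willingness to pay
    return willingness >= price

random.seed(0)

# Pattern prediction: the share of 10,000 shoppers buying at $2.50 is a
# stable aggregate, even though each shopper's context is unobserved.
contexts = [random.gauss(1.0, 1.0) for _ in range(10_000)]
share = sum(buys_soda(2.50, c) for c in contexts) / len(contexts)

# Point prediction: this shopper's past purchases say nothing decisive
# about today's hidden context, so the individual forecast stays a guess.
today = buys_soda(2.50, random.gauss(1.0, 1.0))
```

In the sketch, `share` hovers near the same value on every large run, while `today` flips with the draw of a single hidden variable, which is the asymmetry Hayek and Lavoie describe.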

Let's assume for a second that AIs could possess not only all relevant information about an individual, but also that individual's knowledge. Even if companies somehow could gather this knowledge, it would only be a snapshot at a moment in time. Countless converging factors can affect your next decision not to purchase a soda, even if your purchase history suggests you will. Maybe you went to the store that day with a stomach ache. Maybe your doctor just warned you about the perils of high-fructose corn syrup, so you forgo the purchase. Maybe an AI-driven price increase causes you to react by finding an alternative seller.

In other words, when you interact with the market—for instance, going to the store to buy groceries—you are participating in a discovery process about your own preferences or willingness to pay. Every decision emerges organically from an array of influences both internal and external that exist at that given moment. The best that any economic decision-maker can do, including Amazon, is to make pattern predictions using stale data that cannot predict these organic decisions and thus have no guarantee of persisting into the future. AI can be thought of as a technology that reduces the cost of pattern predictions by better collecting and interpreting the available data—but the data that would enable either humans or machines to make point predictions simply does not exist.

When we make grand claims about AI’s ability to price products as Uber does, we forget about the role of human action in consuming these services. As Will Rinehart argues, “prices convey information, which then allows for individual participants to act.” The point is that no matter how much information companies collect, and how sophisticated AI becomes, consumer preferences are not something determined ahead of time that exist concretely for the AI to discover. The data predicting these exact choices don’t exist, because the patterns of choices made by individuals are defined by the process of exchange and interaction itself. As long as the competitive forces that drive this process continue to exist, we need not fear dynamic pricing models will erode consumer welfare.

In short, choice is genuine and powerful; we don't carry around a static schedule in our heads of what prices we are willing to pay for which goods under specific circumstances. Instead, we make choices based on our knowledge and unintentionally reveal our preferences, not just to others, but often to ourselves as well. As economist James Buchanan put it, market "participants do not know until they enter the process what their own choices will be." Our preferences, such as they are, are continually created and updated in the process of interaction itself. People's preferences are consequently moving targets, and cannot be accurately forecasted by AI based on data reflecting past choices.

What do these insights mean for discussions on protecting consumers from exploitative manipulation by companies such as Amazon? First, the epistemic obstacles faced by algorithms mean that worst-case scenarios are unlikely to come about. Instead, the benefits of algorithmic dynamic pricing will outweigh the societal costs. For example, consumers benefit from the Google Chrome add-on Honey, which combs the web for the best coupons to apply when checking out any given product.

Policymakers should be wary of regulating companies to protect consumers against a threat that may never materialize. If consumers choose to use platforms such as Amazon or Spotify that gather personal data, we should not automatically assume these algorithms will erode consumer welfare. If policymakers rush to protect consumers because we're overestimating the forecasting capabilities of AI and underestimating the entrepreneurial capability of individuals in the market, they risk stifling the boon to consumers born of technological innovation in AI. Policymakers should instead leave room to let individuals and firms work out the best tradeoff between privacy and tailored customer services.

Governing Virtual Reality Social Spaces | https://techliberation.com/2018/03/05/governing-virtual-reality-social-spaces/ | Mon, 05 Mar 2018

"You don't gank the noobs," my friend's brother explained to me, growing angrier as he watched a high-level player repeatedly stalk and then cut down my feeble, low-level night elf cleric in the massively multiplayer online roleplaying game World of Warcraft. He logged on to his "main," a high-level gnome mage, and went in search of my killer, carrying out two-dimensional justice. What he meant by his exclamation was that players have developed a social norm banning the "ganking," or killing, of low-level "noobs" just starting out in the game. He reinforced that norm by punishing the overzealous player with premature annihilation.

Ganking noobs is an example of undesirable social behavior in a virtual space on par with cutting people off in traffic or jumping ahead in line. Punishments for these behaviors take a variety of forms, from honking, to verbal confrontation, to virtual manslaughter. Virtual reality social spaces, defined as fully artificial digital environments, are the newest medium for social interaction. Increased agency and a sense of physical presence within a VR social world like VRChat allow users to more intensely experience both positive and negative situations, reopening the discussion of how best to govern these spaces.

When the late John Perry Barlow, the founder of the Electronic Frontier Foundation, published his "Declaration of the Independence of Cyberspace" in 1996, humanity stood on the frontier of an online world bereft of physical borders and open to new emergent codes of conduct. He wrote, "I declare the global social space we are building to be naturally independent of the tyrannies [governments] seek to impose on us." He also stressed the role of "culture, ethics and unwritten codes" in governing the new social society, where the First Amendment served as the law of the virtual land. Yet Barlow's optimism about the capacity of users to build a better society online stands in stark contrast to current criticisms of social platforms as cesspools of misinformation, extremism, and other forms of undesirable behavior.

As a result of VRChat's largely open-ended design and its wide user base from the PC and headset gaming communities, there is a broad spectrum of user behavior. On one hand, users have experienced virtual sexual harassment and incessant trolling by mobs of poorly rendered echidnas consistent with the "Ugandan Knuckles" meme. On the other, VRChat is also a source of creativity and positive experiences, including collective concerts and dance parties. When a player suffered a seizure in VRChat, players stopped and waited to make sure he was okay and sanctioned other players who were trying to make fun of the situation. VRChat's response to social discord provides a good example of governance in virtual spaces and of how layers of governance interact to improve user experiences.

Governance is the process of decision-making among stakeholders involved in a collective problem that leads to the production of social norms and institutions. In virtual social spaces such as VRChat, layers of formal and informal governance are setting the stage for norms of behavior to emerge. The work of political scientist Elinor Ostrom provides a framework through which to understand the evolution of rules to solve social problems. In her research on governing a common resource, she emphasized the importance of including multiple stakeholders in the governing process, instituting a mechanism for dispute resolution and sanctioning, and making sure the rules and norms that emerge are tailored to the community of users. She wrote, "building trust in one another and developing institutional rules that are well matched to the ecological systems being used are of central importance for solving social dilemmas." Likewise, the governance structures that emerge in VRChat are game-specific and depend on the enforcement of explicit formal and informal rules, physical game design characteristics, and the social norms of users. I delve into each layer of governance in turn.

At the highest level, the U.S. government has passed formal laws and policies that affect virtual social spaces. For example, the Computer Fraud and Abuse Act governs computer-related crimes and prohibits unauthorized access to users' accounts. Certain types of content, such as child pornography, are illegal under federal law. At the intersection of VR video games and intellectual property law, publicity rights govern the permissions process for using celebrities' likenesses in an avatar. Trademark and copyright laws determine limitations on what words, phrases, symbols, logos, videos or music can be reproduced in VR and what is considered "fair use."

Game designers and gaming platforms can also employ an explicit code of conduct that goes beyond formal federal laws and policies. For example, VRChat's code of conduct details proper mic etiquette and includes rules about profanity, sexual conduct, self-promotion and discrimination. Social platforms rely on a team of enforcers. VRChat has a moderation team that monitors virtual worlds constantly: external reviewers look at flagged content, and in-game bouncers monitor behavior in real time and remove the bad eggs.

By virtue of their technical decisions, game designers also govern the virtual spaces they create. For example, the design decision to put a knife or a banana in a VR social space will affect how users behave. VRChat has virtual presentation rooms, courtrooms and stages that prompt users to do anything from singing, to stand-up comedy, to prosecuting other users in mock trials. Furthermore, game designers can include in-game mechanisms that empower users to flag inappropriate behavior or mute obnoxious players, a function that exists in VRChat.
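The flag-and-mute mechanisms described above amount to a simple data structure. The sketch below is our own hypothetical illustration of that kind of in-game self-moderation tool; the class and method names are invented for the example and are not VRChat's actual API.

```python
# Hypothetical sketch of in-game self-moderation: each user keeps a
# personal mute list, and flagged incidents queue up for moderators.

class SocialSpace:
    def __init__(self):
        self.mutes = {}   # user -> set of users they have muted
        self.flags = []   # queue of (reporter, target, reason) reports

    def mute(self, user, target):
        """'user' stops hearing 'target'; affects only this user's view."""
        self.mutes.setdefault(user, set()).add(target)

    def can_hear(self, listener, speaker):
        """A muted player's voice simply never reaches the listener."""
        return speaker not in self.mutes.get(listener, set())

    def flag(self, reporter, target, reason):
        """Queue an incident for the human moderation team to review."""
        self.flags.append((reporter, target, reason))

room = SocialSpace()
room.mute("anne", "troll42")
room.flag("anne", "troll42", "harassment")
```

The design point is Ostrom's: muting is a local, user-level rule that works instantly without central intervention, while flagging escalates to the platform's formal enforcement layer, so the two layers of governance complement rather than replace each other.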

Earning a reputation for malfeasance and poor user experience is bad business for VRChat, so the company recently re-envisioned their governance approach. They acknowledge their task in an open letter to their users: “One of the biggest challenges with rapid growth is trying to maintain and shape a community that is fun and safe for everyone. We’re aware there’s a percentage of users that choose to engage in disrespectful or harmful behavior…we’re working on new systems to allow the community to better self-moderate and for our moderation team to be more effective.” The memo detailed where users could provide feedback and ideas to improve VRChat, suggesting that users can be actively involved in the rule-making process.

In Elinor Ostrom's Nobel Prize lecture, she criticized the oft-made assumption that enlightened policymakers or external designers should be the ones "to impose an optimal set of rules on individuals involved." Instead, she argued that the self-reflection and creativity of users within a game could serve "to restructure their own patterns of interaction." The resulting social norms are a form of governance at the most local level.

Ostrom’s framework demonstrates that good social outcomes emerge through our collective actions, which are influenced by top-down formal rules from platforms and bottom-up norms from users. The goal of stakeholders involved in social VR should be to foster the development of codes of conduct that bring out the best in humanity. Governance in virtual worlds is a process, and players in social spaces have a large role to play. Are you ready for that responsibility, player one?
