– Coauthored with Anna Parsons
“Algorithms are only as good as the data that gets packed into them,” said Democratic presidential hopeful Elizabeth Warren. “And if a lot of discriminatory data gets packed in, if that’s how the world works, and the algorithm is doing nothing but sucking out information about how the world works, then the discrimination is perpetuated.”
Warren’s critique of algorithmic bias reflects a growing concern about the algorithms we interact with every day.
Algorithms leverage big data sets to make or influence decisions ranging from movie recommendations to creditworthiness. Before algorithms, humans made these decisions in advertising, shopping, criminal sentencing, and hiring. Legislative concerns center on bias: the capacity for algorithms to perpetuate gender bias and racial and minority stereotypes. Nevertheless, current approaches to regulating artificial intelligence (AI) and algorithms are misguided.
The European Union has enacted stringent data protection rules requiring companies to explain publicly how their algorithms make decisions. Similarly, the US Congress has introduced the Algorithmic Accountability Act, which would regulate how companies build their algorithms. These actions reflect the two most common approaches to addressing algorithmic bias: transparency and disclosure. In effect, such regulations require companies to publicly disclose the source code of their algorithms and explain how they make decisions. Unfortunately, this strategy would fail to mitigate AI bias because it regulates the business model and inner workings of algorithms rather than holding companies accountable for outcomes.
Research shows that machines can treat similarly situated people and objects differently. Algorithms risk reproducing, or even amplifying, human biases in certain cases. For example, automated hiring systems make decisions faster and at a larger scale than their human counterparts, making any bias more pronounced.
However, research has also shown that AI can be a helpful tool for improving social outcomes and gender equality. For example, Disney uses AI to help identify and correct human biases by analyzing the output of its algorithms. Its machine learning tool allows the company to compare the number of male and female characters in its movie scripts, as well as other factors such as the number of speaking lines for characters based on their gender, race, or disability.
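To give a rough sense of what this kind of output analysis involves (this is a minimal sketch, not Disney’s actual tool; the annotated-script format and field names here are hypothetical), a few lines of Python can tally speaking lines by a character attribute:

```python
from collections import Counter

def tally_speaking_lines(script_lines, attribute):
    """Count speaking lines grouped by a character attribute
    (e.g., 'gender' or 'race'). Each line of dialogue is assumed
    to be a dict such as {'character': 'MULAN', 'gender': 'female'} --
    a hypothetical schema for illustration only."""
    return dict(Counter(line.get(attribute, "unknown") for line in script_lines))

# Hypothetical example: three annotated lines of dialogue
script = [
    {"character": "MULAN", "gender": "female"},
    {"character": "SHANG", "gender": "male"},
    {"character": "MULAN", "gender": "female"},
]
print(tally_speaking_lines(script, "gender"))  # {'female': 2, 'male': 1}
```

The point of such a tool is not to explain how any algorithm reasons, but to measure what it produces, which is exactly the accountability-oriented posture this article argues for.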
AI and algorithms have the potential to increase social and economic progress. Therefore, policymakers should avoid broad regulatory requirements and focus on guidelines and policies that address harms in specific contexts. For example, algorithms that make hiring decisions should be treated differently than algorithms that produce book recommendations.
Promoting algorithmic accountability is one targeted way to mitigate problems with bias. Best practices should include a review process to ensure the algorithm is performing its intended job.
Furthermore, laws that apply to human decisions must also apply to algorithmic decisions. Employers must comply with anti-discrimination laws in hiring; the same principle should apply to the algorithms they use.
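One concrete check already used in US hiring law is the EEOC’s “four-fifths rule”: a selection rate for any group below 80 percent of the highest group’s rate is treated as evidence of adverse impact. A minimal sketch of how such an outcome audit might work in Python (the data and function names are illustrative, not from any specific regulation or product):

```python
def selection_rates(decisions):
    """Compute the hire rate per group from (group, hired) pairs."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate -- the EEOC's adverse-impact threshold."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Illustrative audit of an algorithm's hiring outcomes
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(four_fifths_check(decisions))  # {'A': True, 'B': False}
```

Notice that this audit never inspects the algorithm’s source code; it evaluates outcomes, which is why accountability requirements can apply even to opaque machine learning systems.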
In contrast, requiring organizations to explain how their algorithms work would effectively bar companies from using entire categories of algorithms. For example, machine learning algorithms construct their own decision-making systems from databases of characteristics without exposing the reasoning behind their decisions. By focusing on accountability for outcomes, operators are free to choose the best methods to ensure their algorithms do not perpetuate bias, and to improve the public’s confidence in their systems.
Transparency and explanations have other positive uses. For example, there is a strong public interest in requiring transparency in the criminal justice system. The government, unlike a private company, has constitutional obligations to be transparent. Thus, transparency requirements for risk assessments used in the criminal justice system can help prevent abuses of civil rights.
The Trump administration recently released a new policy framework for artificial intelligence. It offers guidance for emerging technologies that both supports new innovation and addresses concerns about disruptive technological change. This is a positive step toward sensible, flexible solutions to the AI governance challenge. Concerns about algorithmic bias are legitimate. But the debate should center on a nuanced, targeted approach to regulation and avoid treating algorithmic disclosure as a cure-all. A regulatory approach centered on transparency requirements could do more harm than good. Instead, an approach that emphasizes accountability ensures organizations use AI and algorithms responsibly to further economic growth and social equality.