On July 12, I participated in a Bipartisan Policy Center event on “Civil Society Perspectives on Artificial Intelligence Impact Assessments.” It was an hour-long discussion moderated by Michele Nellenbach, Vice President of Strategic Initiatives at the Bipartisan Policy Center, and it also featured Miriam Vogel, President and CEO of EqualAI. We discussed the ins and outs of algorithmic auditing and impact assessments for artificial intelligence. This is one of the hottest topics in the field of AI governance today, with proposals multiplying rapidly in academic and public policy circles. Several governments are already considering mandating AI auditing and impact assessments.
You can watch the entire discussion here, and down below I have included some of my key talking points from the session. I am currently finishing up my next book, which is on how to craft a flexible governance framework for AI and algorithmic technologies. It includes a lengthy chapter on this issue and I also plan on eventually publishing a stand-alone study on this topic.
Upsides:
Algorithmic auditing and AI impact assessments represent an important step toward the professionalization of AI ethics.
- Audits and impact assessments can help ensure organizations live up to their promises about “baking in” ethical best practices (on issues like safety, security, privacy, and non-discrimination).
- Audits and impact assessments are already utilized in other fields to address safety practices, financial accountability, labor practices and human rights issues, supply chain practices, and various environmental concerns.
- Internal auditing efforts, such as those guided by Institute of Internal Auditors (IIA) standards, could expand to include AI risks.
- Eventually, more and more organizations will expand their internal auditing efforts to incorporate AI risks because it makes good business sense to stay on top of these issues and avoid liability, negative publicity, and customer backlash.
Build on the IAPP model to help “professionalize” AI ethics in data-driven organizations
- The International Association of Privacy Professionals (IAPP) trains and certifies privacy professionals through formal credentialing programs, supplemented by regular meetings, annual awards, and a variety of outreach and educational initiatives.
- We should use a similar model for AI, starting by supplementing Chief Privacy Officers with Chief Ethical Officers.
- This is how we formalize the ethical frameworks and best practices that have been formulated by various professional associations such as the IEEE, ISO, and ACM.
The AI auditing and impact assessment process can be rooted in the voluntary risk assessment frameworks developed by the OECD and NIST
- OECD — Framework for the Classification of AI Systems with the twin goals of helping “to develop a common framework for reporting about AI incidents that facilitates global consistency and interoperability in incident reporting,” and advancing “related work on mitigation, compliance and enforcement along the AI system lifecycle, including as it pertains to corporate governance.”
- NIST — AI Risk Management Framework “to better manage risks to individuals, organizations, and society associated with artificial intelligence.”
- These frameworks are being developed through a consensus-driven, open, transparent, and collaborative process, not through top-down regulation.
- Many AI developers and business groups have endorsed the use of such audits and assessments. BSA|The Software Alliance has said, “By establishing a process for personnel to document key design choices and their underlying rationale, impact assessments enable organizations that develop or deploy high-risk AI to identify and mitigate risks that can emerge throughout a system’s lifecycle.” (A rough sketch of what such documentation might look like follows this list.)
- Developers can still be held accountable for violations of certain ethical norms and best practices, both through private actions and potentially even through formal sanctions by consumer protection agencies (the Federal Trade Commission, comparable state offices, or state attorneys general).
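To make the BSA point about documentation more concrete, here is a minimal, purely illustrative Python sketch of what a record of “key design choices and their underlying rationale” might look like. The field names, the example system, and the entries are all hypothetical; they are not drawn from the NIST, OECD, or BSA materials mentioned above.

```python
# Hypothetical sketch of an impact-assessment record. Field names and the
# example entry are illustrative only, not taken from any actual framework.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DesignChoice:
    decision: str                # what was decided
    rationale: str               # why it was decided
    risks_identified: List[str]  # risks flagged during review
    mitigations: List[str]       # steps taken to address those risks
    reviewer: str                # who signed off
    review_date: str

@dataclass
class ImpactAssessment:
    system_name: str
    intended_use: str
    choices: List[DesignChoice] = field(default_factory=list)

    def summary(self) -> str:
        lines = [f"Impact assessment for {self.system_name} ({self.intended_use})"]
        for c in self.choices:
            lines.append(f"- {c.decision}: {len(c.risks_identified)} risk(s), "
                         f"{len(c.mitigations)} mitigation(s), reviewed {c.review_date}")
        return "\n".join(lines)

# Example (hypothetical) entry:
assessment = ImpactAssessment(
    system_name="resume-screening-model",
    intended_use="rank job applicants for recruiter review",
    choices=[DesignChoice(
        decision="exclude zip code as an input feature",
        rationale="zip code can act as a proxy for protected characteristics",
        risks_identified=["disparate impact via correlated features"],
        mitigations=["quarterly disparate-impact testing"],
        reviewer="model-risk team",
        review_date="2022-07-01",
    )],
)
print(assessment.summary())
```

Even a simple record like this creates the kind of audit trail that internal reviewers or independent auditors could later examine.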
Independent AI auditing bodies are already being formed and could play an important role in helping to further professionalize AI ethics.
- EqualAI / World Economic Forum (WEF) — “Badge Program for Responsible AI Governance”
- The field of algorithmic consulting continues to expand (e.g., O’Neil Risk Consulting & Algorithmic Auditing).
Downsides:
Algorithmic audits and impact assessments are confronted with the same sort of definitional challenges that pervade AI more generally.
- What constitutes a harm or impact in any given context will often be a contentious matter.
- Auditing algorithms is nothing like auditing an accounting ledger, where the numbers either add up or they don’t.
- With algorithms, there are no binary metrics that can quantify the correct amount of privacy, safety, or security in any given system, as the sketch below illustrates.
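The following minimal Python sketch makes the contrast concrete. The numbers, the hiring scenario, and the choice of metrics are all hypothetical: a ledger check reduces to a true/false test, while an algorithmic audit produces continuous measures whose acceptability is a contested, context-specific judgment.

```python
# Illustrative sketch only: toy numbers and metric choices are hypothetical,
# meant to contrast a binary ledger check with the judgment calls an
# algorithmic audit requires.

# An accounting audit can reduce to a yes/no question:
debits = [1200.00, 350.75, 99.25]
credits = [1650.00]
ledger_balances = abs(sum(debits) - sum(credits)) < 0.01  # True or False

# An algorithmic audit cannot. Suppose a hiring model produced these
# (hypothetical) outcomes for two applicant groups:
outcomes = {
    # group: (applicants, favorable decisions)
    "group_a": (1000, 620),
    "group_b": (1000, 540),
}
rate_a = outcomes["group_a"][1] / outcomes["group_a"][0]
rate_b = outcomes["group_b"][1] / outcomes["group_b"][0]

# The selection-rate gap is one of many competing fairness measures;
# it is a matter of degree, not a pass/fail result.
parity_gap = abs(rate_a - rate_b)
print(f"Ledger balances: {ledger_balances}")
print(f"Selection rate gap: {parity_gap:.2f}")

# Is a 0.08 gap acceptable? Under the informal "four-fifths" guideline used
# in U.S. disparate-impact analysis, the answer turns on the ratio of rates,
# not the difference:
four_fifths_ok = min(rate_a, rate_b) / max(rate_a, rate_b) >= 0.8
print(f"Passes four-fifths guideline: {four_fifths_ok}")

# Other metrics (equalized odds, calibration, etc.) can point in different
# directions, and no single number certifies the "correct" amount of
# fairness, privacy, or safety in a system.
```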
Audits and impact assessments should not become a formal regulatory process
- The E.U. AI Act will be a disaster for AI innovation and investment.
- The proposed U.S. Algorithmic Accountability Act of 2022 would require developers to perform impact assessments and file them with the Federal Trade Commission. A new Bureau of Technology would be created inside the agency to oversee the process.
- If enforced through a rigid regulatory regime and yet another federal bureaucracy, compliance with algorithmic auditing mandates would likely become a convoluted, time-consuming process that would significantly slow the pace of AI development.
- The academic literature on AI auditing and impact assessments largely ignores potential costs; mandatory auditing and assessments are treated as a sort of frictionless nirvana when we already know that such a process would entail significant costs.
The National Environmental Policy Act (NEPA) is a bad model for AI impact assessments.
- Some AI scholars suggest that NEPA should be the model for AI impact assessments / audits.
- NEPA assessments were initially quite short (sometimes less than 10 pages), but today the average length of these statements is more than 600 pages, with appendices that average over 1,000 pages on top of that.
- NEPA assessments take an average of 4.5 years to complete, and between 2010 and 2017, four assessments took at least 17 years to complete.
- Many important public projects never get done or take far too long to complete at considerably higher expenditure than originally predicted.
Applying the NEPA model to algorithmic systems would mean that much AI innovation would grind to a halt in the face of lengthy delays, paperwork burdens, and considerable compliance costs.
- It would create a number of veto points that opponents of AI could use to stop much progress in the field. This is the “vetocracy” problem.
- We cannot wait years or even months for bureaucracies to eventually get around to formally signing off on audits or assessments, many of which would be obsolete before they were even completed.
Many AI developers would likely look to innovate elsewhere if auditing or impact assessments became such a bureaucratic and convoluted compliance nightmare.
- The “global innovation arbitrage” problem would kick in: innovators and investors increasingly relocate to the jurisdictions where they are treated most hospitably.
Mandated algorithmic auditing could give rise to a final problem: Political meddling.
- Both parties already accuse digital technology companies of manipulating their algorithms to censor their views.
- Whichever party is in power at any given time could use the process to politicize terms like “safety,” “security,” and “non-discrimination” to nudge or even force private AI developers to alter their algorithms to satisfy the desires of partisan politicians or bureaucrats.
- The FCC abused its ambiguous authority to regulate “in the public interest” to indirectly censor broadcasters through intimidation via jawboning tactics and other “agency threats,” or “regulation by raised eyebrow.”
- There are potentially profound First Amendment issues in play with the regulation of algorithms that have not been explored here but which could become a major part of AI regulatory efforts going forward.
Summary:
- Auditing and impact assessments can be a part of a more decentralized, polycentric governance framework.
- Even in the absence of any sort of hard law mandates, algorithmic auditing and impact reviews represent an important way to encourage responsible AI development.
- But we should be careful about mandating such things due to the many unanticipated costs and consequences of converting this into a top-down, bureaucratic regulatory regime.
- The process should evolve gradually and organically, as it has in many other fields and sectors.