Event Video on Algorithmic Auditing and AI Impact Assessments

July 13, 2022


Upsides:

  • Audits and impact assessments can help ensure that organizations live up to their promises about “baking in” ethical best practices (on issues like safety, security, privacy, and non-discrimination).
  • Audits and impact assessments are already utilized in other fields to address safety practices, financial accountability, labor practices and human rights issues, supply chain practices, and various environmental concerns.
  • Internal auditing / Institute of Internal Auditors (IIA) efforts could expand to include AI risks
  • Eventually, more and more organizations will expand their internal auditing efforts to incorporate AI risks because it makes good business sense to stay on top of these issues and avoid liability, negative publicity, and customer backlash.
  • The International Association of Privacy Professionals (IAPP) trains and certifies privacy professionals through formal credentialing programs, supplemented by regular meetings, annual awards, and a variety of outreach and educational initiatives.
  • We should use a similar model for AI and start by supplementing Chief Privacy Officers with Chief Ethics Officers.
  • This is how we formalize the ethical frameworks and best practices that have been formulated by various professional associations such as IEEE, ISO, ACM and others.
  • OECD — Framework for the Classification of AI Systems with the twin goals of helping “to develop a common framework for reporting about AI incidents that facilitates global consistency and interoperability in incident reporting,” and advancing “related work on mitigation, compliance and enforcement along the AI system lifecycle, including as it pertains to corporate governance.”
  • NIST — AI Risk Management Framework “to better manage risks to individuals, organizations, and society associated with artificial intelligence.”
  • These frameworks are being developed through a consensus-driven, open, transparent, and collaborative process, not through top-down regulation.
  • Many AI developers and business groups have endorsed the use of such audits and assessments. BSA|The Software Alliance has said that, “By establishing a process for personnel to document key design choices and their underlying rationale, impact assessments enable organizations that develop or deploy high-risk AI to identify and mitigate risks that can emerge throughout a system’s lifecycle.”
  • Developers can still be held accountable for violations of certain ethical norms and best practices, both through private sanctions and potentially even through formal sanctions by consumer protection agencies (the Federal Trade Commission, comparable state offices, or state AGs).
  • EqualAI / WEF — “Badge Program for Responsible AI Governance”
  • The field of algorithmic consulting continues to expand (e.g., O’Neil Risk Consulting).

Downsides:

  • What constitutes a harm or impact in any given context will often be a contentious matter.
  • Auditing algorithms is nothing like auditing an accounting ledger, where the numbers either add up or they don’t.
  • With algorithms there are no binary metrics that can quantify the correct amount of privacy, safety, or security in any given system.
  • The E.U. AI Act will be a disaster for AI innovation and investment.
  • The proposed U.S. Algorithmic Accountability Act of 2022 would require that developers perform impact assessments and file them with the Federal Trade Commission. A new Bureau of Technology would be created inside the agency to oversee the process.
  • If enforced through a rigid regulatory regime and another federal bureaucracy, compliance with algorithmic auditing mandates would likely become a convoluted, time-consuming bureaucratic process. That would likely slow the pace of AI development significantly.
  • Academic literature on AI auditing and impact assessment ignores potential costs; mandatory auditing and assessments are treated as a sort of frictionless nirvana when we already know that such a process would entail significant costs.
  • Some AI scholars suggest that the National Environmental Policy Act (NEPA) should be the model for AI impact assessments and audits.
  • NEPA assessments were initially quite short (sometimes less than 10 pages), but today the average length of these statements is more than 600 pages, and they include appendices that average over 1,000 pages on top of that.
  • NEPA assessments take an average of 4.5 years to complete, and between 2010 and 2017 there were four assessments that took at least 17 years to complete.
  • Many important public projects never get done or take far too long to complete at considerably higher expenditure than originally predicted.
  • A mandatory regime would create a number of veto points that opponents of AI could use to stop much progress in the field. This is the “vetocracy” problem.
  • We cannot wait years or even months for bureaucracies to eventually get around to formally signing off on audits or assessments, many of which would be obsolete before they were even done.
  • The “global innovation arbitrage” problem would kick in: innovators and investors increasingly relocate to the jurisdictions where they are treated most hospitably.
  • Both parties already accuse digital technology companies of manipulating their algorithms to censor their views.
  • Whichever party is in power at any given time could use the process to politicize terms like “safety,” “security,” and “non-discrimination” to nudge or even force private AI developers to alter their algorithms to satisfy the desires of partisan politicians or bureaucrats.
  • The FCC abused its ambiguous authority to regulate “in the public interest” to indirectly censor broadcasters through intimidation via jawboning tactics and other “agency threats,” or “regulation by raised eyebrow.”
  • There are potentially profound First Amendment issues in play with the regulation of algorithms that have not been explored here but which could become a major part of AI regulatory efforts going forward.

Summary:

  • Auditing and impact assessments can be a part of a more decentralized, polycentric governance framework.
  • Even in the absence of any sort of hard law mandates, algorithmic auditing and impact reviews represent an important way to encourage responsible AI development.
  • But we should be careful about mandating such things due to the many unanticipated costs and consequences of converting this into a top-down, bureaucratic regulatory regime.
  • The process should evolve gradually and organically, as it has in many other fields and sectors.
