Human Capital & STRATEGY-I


Through a Fair Tech Lens: Illuminating Bias in AI Algorithms

AI is a powerful tool, but outcomes are only as good as the inputs. There is a risk of bias and discrimination in AI systems, making AI auditing a social imperative.

By Lisa Trumbull

Artificial intelligence (AI) auditing is the practice of analyzing algorithms and their decision outcomes to identify any potential biases or disparities. The goal is to ensure AI is making fair and accurate decisions that do not negatively impact specific groups of people.

Auditing is increasingly important as AI is integrated into various industries and decision-making processes, but it is a complex undertaking. Academics and technology experts are developing best practices and new models for minimizing the risk of bias, a challenge that grows with increasing awareness of the social and organizational impacts of bias in machine learning (ML) and AI.

Fairness, Social Justice, and Equity in AI

In the simplest terms, the audit process involves determining the scope of the audit, identifying the data used to train the algorithm, and examining the decision outcomes. Relevant data, including inputs, outputs, and training data, is collected and analyzed using statistical methods to identify any disparities or biases. The results are interpreted to determine whether outcomes or decisions are fair and accurate and whether any biases or disparities exist. If they do, the auditor recommends changes to the algorithm or to the training data to mitigate the issues, and then continues monitoring the system’s performance.
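As a concrete illustration of the statistical step, the minimal sketch below compares favorable-outcome rates across demographic groups in a sample of logged decisions. The data, group labels, and metric shown are hypothetical assumptions for illustration; real audits use richer metrics and properly sampled data.

```python
# A minimal sketch of the statistical step of an AI audit: comparing
# decision outcomes across demographic groups. The sample data below
# is a hypothetical illustration, not real audit output.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the favorable-outcome rate for each group.

    decisions: list of (group_label, outcome) pairs, where outcome is
    1 for a favorable decision (e.g., loan approved) and 0 otherwise.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: (group, decision) pairs pulled from logs.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(f"Selection rates: {selection_rates(sample)}")
print(f"Demographic parity gap: {demographic_parity_gap(sample):.2f}")
# An auditor would flag the gap if it exceeds a threshold chosen in scoping.
```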

The challenge is that AI uses historical data, so bias can creep in at any time. The most fundamental challenge is improving data sampling techniques to ensure algorithmic fairness. A group of academics at the University of Southern California published an article on AI bias and fairness in the Proceedings of the 37th AAAI Conference on Artificial Intelligence in which they propose a novel algorithm, running on an external server, for “fairness-aware aggregation” of the data used to train machine learning and AI models. An organization can then apply its own local debiasing methods when it accesses the server dataset. The purpose is to train algorithmic models to make decisions without bias against particular demographic groups, an approach that could mitigate discrimination in areas like healthcare and recruitment. This example of “federated learning” demonstrates the complexity of eliminating bias.

It is not just the data that leads to discrimination in AI outcomes. A study on algorithmic discrimination in hiring practices found that “bias stems from limited raw data sets and biased algorithm designers.” Bias is embedded in the development of machine learning (ML), a branch of AI in which mathematical models help a computer learn without explicit instruction. Algorithmic bias can enter during dataset construction, the engineer’s formulation of the problem, and feature selection. For example, researchers typically work at a 95% confidence level, leaving a one-in-twenty chance that a finding reflects chance rather than reality, which means nearly all ML algorithms are built on imperfect, potentially biased databases. Datasets are also frequently built around mainstream groups because that is the easiest data to collect.
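The paper’s actual algorithm is more involved, but a toy version of the underlying idea, weighting each participant’s contribution to a shared model by a locally computed fairness score, might look like the following. The weighting scheme, scores, and parameter vectors here are illustrative assumptions, not the USC authors’ method.

```python
# A minimal sketch of federated averaging with a fairness-aware twist,
# loosely inspired by the "fairness-aware aggregation" idea described
# above. This weighting scheme is a hypothetical illustration, not the
# algorithm from the AAAI paper.

import numpy as np

def fairness_aware_aggregate(client_models, fairness_scores):
    """Average client model parameters, weighting each client by a
    fairness score in (0, 1] (e.g., 1 minus its measured disparity),
    so clients with fairer local data contribute more to the model.
    """
    weights = np.array(fairness_scores, dtype=float)
    weights /= weights.sum()          # normalize weights to sum to 1
    stacked = np.stack(client_models)  # shape: (n_clients, n_params)
    return np.average(stacked, axis=0, weights=weights)

# Three hypothetical clients, each holding a locally trained parameter
# vector and a fairness score computed on its own data.
clients = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.3, 0.9])]
scores = [0.9, 0.5, 0.7]  # higher = lower measured disparity locally

print(fairness_aware_aggregate(clients, scores))
```

In an actual federated setup, the server would never see the raw data, only the parameters and scores, which is what lets each organization keep its own local debiasing methods.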

If dataset bias is not addressed, AI’s potential to cause social harm is great, and there are ethical considerations as well. In the healthcare industry, for example, AI can improve diagnostics and treatment plans, but it can also perpetuate healthcare disparities across demographics including gender, race, ethnicity, and socioeconomic status. Bias in AI algorithms can lead to unfair decisions concerning diagnosis and treatment. Regular AI audits and testing for bias are necessary to ensure AI outcomes used for decision-making are fair and just.

The Many Parts of an Effective AI Audit

Olga Mack, Strategic Advisor at MIDAO and a general counsel, operations specialist, and tech startup advisor, has named 14 AI audit best practices. The first set begins with defining clear objectives, which can range from regulatory compliance to eliminating discrimination in the talent management process. A multidisciplinary team is needed, with members who bring diverse perspectives. Stakeholders must be involved so that auditors understand concerns and expectations that may not be apparent from a strictly technical perspective.

AI auditors will also identify the tools needed to conduct a thorough audit. Metrics and benchmarks are essential for evaluating the system's performance. Explainability tools help assess the ethical appropriateness of outcomes. Fairness checkers identify whether the AI system treats groups of people differently (see the sketch below). Vulnerability assessment tools help prevent manipulation of the AI system. Additional best practices include continuous monitoring, re-auditing after significant organizational changes, considering broader ethical and societal implications, establishing feedback loops, protecting data privacy, regularly updating audit practices, utilizing third-party audits, establishing transparent communication, and making actionable recommendations. Together, these give a general idea of the complex planning and execution an AI audit requires to achieve the desired results.
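As one example of what a fairness checker might do, the sketch below applies the four-fifths (80%) rule, a common adverse-impact test drawn from U.S. employment guidelines, to selection rates observed from an audited model. The rates and group labels are hypothetical.

```python
# A minimal sketch of a "fairness checker" of the kind mentioned above,
# using the four-fifths (80%) rule as its test. The selection rates and
# group labels are hypothetical illustrations.

def disparate_impact_ratios(rates, reference_group):
    """Ratio of each group's selection rate to the reference group's.
    Ratios below 0.8 are commonly treated as evidence of adverse impact.
    """
    ref = rates[reference_group]
    return {group: rate / ref for group, rate in rates.items()}

# Hypothetical selection rates observed in an audited hiring model.
rates = {"group_a": 0.60, "group_b": 0.42}
ratios = disparate_impact_ratios(rates, reference_group="group_a")

for group, ratio in ratios.items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} [{flag}]")
# group_b's ratio is 0.70, below the 0.8 threshold, so it gets flagged.
```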

Most Daunting Obstacle

As the pervasiveness of AI grows, so does the risk of algorithmic biases causing harm. According to IBM, eliminating bias in AI systems is as difficult as eliminating systemic gender and racial bias in society. IBM also quotes authors Michael Chui, James Manyika, and Mehdi Miremadi of McKinsey, who wrote, “Such biases have a tendency to stay embedded because recognizing them, and taking steps to address them, requires a deep mastery of data-science techniques, as well as a more meta-understanding of existing social forces, including data collection. In all, debiasing is proving to be among the most daunting obstacles, and certainly the most socially fraught, to date.”

Companies should develop AI governance policies that drive practices focused on compliance, trust, transparency, efficiency in achieving business goals, human review of outcomes, and fairness. The challenge is figuring out how to implement AI audit best practices to mitigate as much bias as possible now and to keep bias out of systems going forward. Unfortunately, bias in AI systems will likely remain an ongoing issue for many years.

Auditing AI algorithms is a technical necessity and a moral imperative in the quest for a more just and equitable society. By shining a light on hidden biases and disparities, organizations are empowered to rectify injustices, foster trust, and harness the full potential of AI technology for the betterment of people – all people.