Fairness Challenges in Artificial Intelligence

Shuvro Chakrobartty, Omar F. El-Gayar
Copyright: © 2023 | Pages: 18
DOI: 10.4018/978-1-7998-9220-5.ch101


Fairness is a highly desirable human value in day-to-day decisions that affect human life. In recent years, many successful AI systems have been developed, and AI methods are increasingly becoming part of new applications for decision-making tasks that were previously carried out by human beings. This raises questions: 1) Can the decision be trusted? 2) Is it fair? Overall, are AI-based systems making fair decisions, or are they increasing unfairness in society? This article presents a systematic literature review (SLR) of existing work on AI fairness challenges. Towards this end, a conceptual bias mitigation framework for organizing and discussing AI fairness-related research is developed and presented. The systematic review maps AI fairness challenges to components of the proposed framework based on the solutions suggested in the literature. Future research opportunities are also identified.
Chapter Preview


In recent years, discrimination through bias in AI systems has made headlines multiple times across multiple industries. For example, in 2018, Amazon’s recruiting algorithm was flagged for penalizing applications that contained the word “women’s” (Dastin, 2018). The AI models were trained to vet applicants by observing patterns in resumes submitted to the company over ten years. Amazon’s AI system had taught itself that male candidates were preferable because most applications came from men, reflecting the tech industry’s male dominance. Bartlett et al. (2021) investigated lending discrimination in the USA and found that its mode has shifted from human bias to algorithmic bias: even online lending backed by algorithmic decision-making charged African American and Latino borrowers higher interest rates.

Key Terms in this Chapter

Counterfactual Fairness: A fairness metric that checks whether a classifier produces the same result for one individual as it does for another individual who is identical to the first, except with respect to one or more sensitive attributes.

Deep Neural Network (DNN): A DNN is an ANN with multiple hidden layers between its input and output layers.

Predictive Parity: A fairness metric that checks whether, for a given classifier, the precision rates are equivalent for subgroups under consideration.

Machine Learning (ML): ML is a term commonly used alongside AI and is a subset of AI. ML refers to systems that can learn from data, i.e., systems that improve by learning over time without direct human intervention.

Fairness Metric: Fairness metrics are measurable notions of “fairness” with mathematical definitions. Commonly used fairness metrics include equalized odds, predictive parity, counterfactual fairness, and demographic parity.

Artificial Neural Network (ANN): ANNs are a class of machine learning algorithms and are at the heart of deep learning. ANNs are comprised of node layers, containing an input layer, one or more hidden layers, and an output layer.

Fairness Constraint: Applying constraints to an algorithm to ensure one or more definitions of fairness are satisfied.

Deep Learning (DL): DL is a subset of ML that relies on DNNs.

Demographic Parity: A fairness metric that is satisfied if the results of a model's classification are not dependent on a given sensitive attribute.

Equalized Odds: A fairness metric that checks if, for any label and attribute, a classifier predicts that label equally well for all values of that attribute.
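The group-based metrics defined above (demographic parity, equalized odds, and predictive parity) can be illustrated with a small sketch. The data below is entirely hypothetical (toy predictions for two groups, “A” and “B”); it only shows how each metric reduces to comparing a simple rate across subgroups.

```python
from collections import namedtuple

# Each record: sensitive group, true label, model prediction (toy values).
Record = namedtuple("Record", "group label pred")

data = [
    Record("A", 1, 1), Record("A", 0, 1), Record("A", 1, 1), Record("A", 0, 0),
    Record("B", 1, 1), Record("B", 0, 0), Record("B", 1, 0), Record("B", 0, 0),
]

def positive_rate(records):
    """P(pred = 1): compared across groups for demographic parity."""
    return sum(r.pred for r in records) / len(records)

def true_positive_rate(records):
    """P(pred = 1 | label = 1): one of the rates equalized odds compares."""
    positives = [r for r in records if r.label == 1]
    return sum(r.pred for r in positives) / len(positives)

def precision(records):
    """P(label = 1 | pred = 1): compared across groups for predictive parity."""
    predicted_pos = [r for r in records if r.pred == 1]
    return sum(r.label for r in predicted_pos) / len(predicted_pos)

group_a = [r for r in data if r.group == "A"]
group_b = [r for r in data if r.group == "B"]

print("demographic parity:", positive_rate(group_a), positive_rate(group_b))
print("equalized odds (TPR):", true_positive_rate(group_a), true_positive_rate(group_b))
print("predictive parity:", precision(group_a), precision(group_b))
```

On this toy data the rates differ between groups (e.g., group A receives positive predictions at a rate of 0.75 versus 0.25 for group B), so none of the three criteria is satisfied; a fairness constraint would require closing these gaps.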
