Cognitive Bias and Fairness Challenges in AI Consciousness


DOI: 10.4018/979-8-3693-2015-0.ch005

Abstract

As artificial intelligence (AI) continues to permeate various facets of our lives, the intersection of cognitive bias and fairness emerges as a critical concern. This chapter explores the intricate relationship between the cognitive biases inherent in AI systems and the pursuit of fairness in their decision-making processes. The evolving landscape of AI consciousness demands a nuanced understanding of these challenges to ensure ethical and unbiased deployment. Cognitive biases in AI systems largely reflect the data on which they are trained, and developing universal standards for fairness that can adapt to diverse contexts remains an ongoing challenge. In conclusion, cognitive bias and fairness in AI consciousness demand a holistic and multidisciplinary approach. Addressing these issues requires collaboration among researchers, ethicists, policymakers, and industry. Developing transparent, adaptive, and universally accepted standards for fairness in AI is essential to ensure the responsible and ethical deployment of these technologies in our increasingly interconnected world.

I. Introduction

A. Brief Overview of Cognitive Bias in Human Consciousness

In the rapidly advancing landscape of artificial intelligence, the interplay between cognitive bias and fairness has emerged as a pivotal challenge (Lark, 2023). As AI systems become more integrated into various aspects of our lives, understanding and mitigating cognitive bias is imperative to ensure fairness and ethical use (HBR, 2019). This chapter delves into the intricate relationship between cognitive bias and fairness, particularly within AI consciousness.

The term “cognitive bias” describes consistent patterns of deviation from norms or rational judgement, frequently resulting from the mind's effort to streamline information processing. When integrated into AI systems, these biases can perpetuate societal inequities, reinforce stereotypes, and compromise the integrity of decision-making processes (Richard L, 2017). This chapter explores the different facets of cognitive bias in AI and their profound implications for fairness. In the sections that follow, we navigate the landscape of cognitive bias, examine its impact on AI consciousness, and explore strategies to foster fairness in developing and deploying artificial intelligence systems. A model is created, and a specific dataset is chosen, based on the sovereign choice of the individual creator or the need to fulfil a contractual duty. AI bias is the intentional or inadvertent imprinting of human prejudices into datasets; the model then produces biased outputs because of distorted training data supplied to the neural network. This input shapes the machine in much the same way as the imprinting process shapes an organism (Jennifer, 2023).

A dataset containing biased human decisions, historical and social injustices, and neglected characteristics such as gender, ethnicity, or national origin can train a model that encodes bias and produces incorrect results (Schwemmer et al., 2020). Once ingrained in an algorithm or system, bias can be mitigated through anonymisation, calibration, or detection of the biased source (Venter et al., 2023). However, when prejudice and false information enter the system, the world receives the harmed product (Langdon & Coltheart, 2000).
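One of the mitigation steps mentioned above, anonymisation, can be sketched as simply dropping protected attributes from the records before training. This is a minimal illustration only: the attribute names below (gender, ethnicity, national origin) are hypothetical examples of a schema, not a real dataset, and, as the comment notes, removing these columns alone does not remove bias that correlated proxy features still encode.

```python
# Hedged sketch of "anonymisation": dropping protected attributes from
# training records. Attribute names here are invented for illustration.

PROTECTED = {"gender", "ethnicity", "national_origin"}

def anonymise(records, protected=PROTECTED):
    """Return copies of the records with protected attributes removed.

    Caveat: this alone does not eliminate bias -- correlated proxy
    features (e.g. a postcode) can still encode the removed attributes,
    which is why calibration or source detection is also discussed.
    """
    return [{k: v for k, v in r.items() if k not in protected}
            for r in records]

# Two toy applicant records with invented values.
applicants = [
    {"income": 52000, "postcode": "E1", "gender": "F", "ethnicity": "X"},
    {"income": 61000, "postcode": "N7", "gender": "M", "ethnicity": "Y"},
]
cleaned = anonymise(applicants)  # protected keys are gone; the rest remain
```

Note that the `postcode` field survives anonymisation, which is precisely how proxy bias can persist after this step.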

According to Osoba et al. (2017), AI still produces many biases, misrepresentations, and inaccuracies, so the technology may not live up to expectations. In facial recognition, researcher Najibi (2020) contends that expanding the dataset used to train the algorithm is essential to overcoming AI bias. However, Gebru et al. (2021) cautioned that the likelihood of inherent biases and misrepresentations increases with dataset size, a caution borne out by the volume of false material that GPT-4-era ChatGPT currently produces. A 2016 ProPublica analysis found that the COMPAS algorithm (Correctional Offender Management Profiling for Alternative Sanctions) was biased against Black people in recidivism prediction (Brackey, 2019). The analysis reports that Black defendants were roughly twice as likely as white defendants to be incorrectly classified as at higher risk of violent recidivism, while white recidivists were incorrectly classified as low risk 63.2 per cent more often than Black defendants.
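The ProPublica finding above is, in effect, a comparison of error rates across demographic groups: a false positive here is a non-recidivist wrongly flagged as high risk. The following is a minimal sketch of how such a disparity can be measured; the labels and group names are invented purely for illustration, not COMPAS data.

```python
# Hedged sketch: comparing false positive rates across groups, the kind
# of group-wise error analysis underlying the ProPublica COMPAS study.
# All data below is a toy example, not real recidivism data.

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): the fraction of actual negatives
    (non-recidivists) that the model wrongly flags as positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_by_group(y_true, y_pred, groups):
    """Compute the false positive rate separately for each group label."""
    rates = {}
    for g in set(groups):
        yt = [t for t, gg in zip(y_true, groups) if gg == g]
        yp = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = false_positive_rate(yt, yp)
    return rates

# Toy example: group "a" has half of its actual negatives wrongly
# flagged positive, group "b" has none.
y_true = [0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
rates = fpr_by_group(y_true, y_pred, groups)  # {"a": 0.5, "b": 0.0}
```

A large gap between the groups' rates, as in this toy output, is the signature of the disparity the ProPublica analysis reported.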

AI systems have failed to identify patients who need pain medication (Nagireddi et al., 2022), and AI has shown a higher rate of systematic discrimination against Black applicants than white applicants in loan and mortgage decisions (Zou et al., 2023). The dangers, consequences, and harms to our society (and to AI as a technology) outweigh the time and cost savings AI was meant to deliver through its original aims of prediction and problem-solving. Bias in AI must be identified, isolated, and remedied. According to Whittaker et al. (2018), bias in AI retards technological growth by fostering prejudice against certain individuals and ideas.
