A. Brief Overview of Cognitive Bias in Human Consciousness
In the rapidly advancing landscape of artificial intelligence, the interplay between cognitive bias and fairness has emerged as a pivotal challenge (Lark, 2023). As AI systems become more integrated into various aspects of our lives, understanding and mitigating cognitive bias is imperative to ensuring fairness and ethical use (HBR, 2019). This chapter delves into the intricate relationship between cognitive bias and fairness, particularly within the context of AI consciousness.
The term “cognitive bias” describes consistent patterns of deviation from norms or rationality in judgment, frequently resulting from the mind's effort to streamline information processing. When these biases are carried into AI systems, they can perpetuate societal inequities, reinforce stereotypes, and compromise the integrity of decision-making processes (Richard L, 2017). This chapter explores the different facets of cognitive bias in AI and their profound implications for fairness, examining the impact of cognitive bias on AI consciousness and the strategies available to foster fairness in developing and deploying artificial intelligence systems. Whether prompted by the sovereign will of the individual creator or by the need to fulfil a contractual duty, a model is created and trained on a specific dataset. AI bias is the intentional or inadvertent imprinting of human prejudices into such datasets; the model then produces biased outputs through its interpretation of the training set supplied to the neural network. This input shapes the machine much as imprinting shapes an organism (Jennifer, 2023).
A dataset containing biased human decisions, historical and social injustices, and disregarded characteristics such as gender, ethnicity, or national origin can train a model that embeds that bias and produces incorrect results (Schwemmer et al., 2020). Once ingrained in an algorithm or system, bias can be mitigated through anonymisation, calibration, or detection of the biased source (Venter et al., 2023). When prejudice and false information enter the system unchecked, however, it is the harmed product that reaches the world (Langdon & Coltheart, 2000).
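The anonymisation step mentioned above can be sketched in a few lines: removing protected attributes from each record before training. This is an illustrative sketch only; the field names and records are invented, and in practice dropping columns does not remove proxy variables (e.g. a postcode that correlates with ethnicity).

```python
# Hypothetical set of protected attributes to strip before training.
SENSITIVE = {"gender", "ethnicity", "national_origin"}

def anonymise(records):
    """Return copies of each record with sensitive fields removed."""
    return [{k: v for k, v in r.items() if k not in SENSITIVE}
            for r in records]

# Invented applicant records for illustration.
applicants = [
    {"income": 52000, "gender": "F", "ethnicity": "X", "defaulted": 0},
    {"income": 31000, "gender": "M", "ethnicity": "Y", "defaulted": 1},
]
clean = anonymise(applicants)
# clean[0] -> {"income": 52000, "defaulted": 0}
```

Anonymisation of this kind is only a first step; calibration and source detection, also named above, address bias that survives attribute removal.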
According to Osoba et al. (2017), AI still produces many biases, misrepresentations, and inaccuracies, and the technology may therefore not live up to expectations. In facial recognition, Najibi (2020) contends that expanding the dataset used to train the algorithm would be essential to overcoming AI bias. However, Gebru et al. (2021) cautioned that the likelihood of inherent biases and misrepresentations increases with dataset size, a caution borne out by the amount of false material that GPT-4 currently produces. A 2016 ProPublica analysis found that the COMPAS algorithm (Correctional Offender Management Profiling for Alternative Sanctions) was biased against Black people in predicting recidivism (Brackey, 2019). The analysis reported that Black defendants were twice as likely as white defendants to be misclassified as having a higher risk of violent recidivism, while white reoffenders were 63.2 per cent more likely than Black defendants to be misclassified as low risk.
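The disparity ProPublica reported rests on comparing error rates across groups: the false positive rate is the share of people who did not reoffend but were nonetheless labelled high risk. A minimal sketch of that comparison, with invented counts purely for illustration:

```python
def false_positive_rate(predictions):
    """predictions: list of (predicted_high_risk, actually_reoffended) pairs.

    FPR = high-risk labels among people who did NOT reoffend,
    divided by the total number of people who did not reoffend.
    """
    fp = sum(1 for pred, actual in predictions if pred and not actual)
    negatives = sum(1 for _, actual in predictions if not actual)
    return fp / negatives

# Invented toy data: two groups scored by a hypothetical risk model.
group_a = [(True, False), (True, False), (False, False), (True, True)]
group_b = [(True, False), (False, False), (False, False), (True, True)]

disparity = false_positive_rate(group_a) / false_positive_rate(group_b)
# group_a FPR = 2/3, group_b FPR = 1/3, so disparity = 2.0
```

A disparity of 2.0 in this toy example corresponds to the "twice as likely to be misclassified" pattern the analysis describes, though the real study used far larger samples and additional metrics.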
AI has failed to identify patients in need of pain medication (Nagireddi et al., 2022), and has demonstrated a greater rate of systematic discrimination against Black applicants than white applicants in lending and mortgage decisions (Zou et al., 2023). The dangers, consequences, and harms to our society, and to AI as a technology, outweigh the time and money savings AI was meant to achieve through its initial aims of prediction and problem-solving. Bias in AI must be identified, isolated, and remedied. According to Whittaker et al. (2018), bias in AI retards technological growth by fostering prejudice against certain individuals and ideas.
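The lending disparity described above is often screened for by comparing approval rates between groups, a criterion known as demographic parity. The sketch below uses invented decision lists; the 0.8 threshold reflects the "four-fifths rule" commonly used as a screening heuristic in employment and lending contexts.

```python
def approval_rate(decisions):
    """decisions: list of 1 (approved) / 0 (denied) outcomes."""
    return sum(decisions) / len(decisions)

# Invented outcomes for two applicant groups.
group_white = [1, 1, 1, 0, 1]  # approval rate 0.8
group_black = [1, 0, 0, 0, 1]  # approval rate 0.4

ratio = approval_rate(group_black) / approval_rate(group_white)
# ratio = 0.5; the four-fifths rule flags ratios below 0.8 as
# potential evidence of disparate impact
```

Such a check only flags unequal outcomes; it does not by itself establish whether the disparity stems from the model, the training data, or structural factors upstream of both.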