AI and Social Impact: A Review of Current Use Cases and Broader Implications

Sandra Moore, Sheena Brown, William Butler
DOI: 10.4018/978-1-7998-8693-8.ch008

Abstract

Just because everything can be automated does not mean everything should be. As machine learning and artificial intelligence become intertwined within the global fabric of society, potential societal impacts must be considered. Is it necessary to know someone's sexual orientation? Will that help sell products, or will it pose a threat to that individual? Countries call for such algorithms; however, the literature has shown that current attempts are neither significant nor correct most of the time. Is it important to know one's race? What happens when a person of color is targeted by police on the basis of biased algorithms? Or denied a loan based on biased resourcing that indicates low-income individuals are more likely to offend? These algorithms contain a multitude of biases rooted in the datasets used. The use of inclusive datasets is necessary to obtain accurate, unbiased, and therefore viable data and to ensure that AI technologies function correctly.
Chapter Preview

Background

Bolukbasi, Chang, Zou, Saligrama, and Kalai (2016) discuss why machine learning should not be implemented however one pleases, without regard for the consequences: careless practice amplifies bias in the data, which can result in very real harm such as jail time or the denial of a loan (Buolamwini & Gebru, 2018). Studies of word bias have highlighted associations that reinforce gender stereotypes (e.g., the learned word analogy "man is to computer programmer as woman is to homemaker"). Such pre-labeled data produce biased models; "algorithms trained with biased data have resulted in algorithmic discrimination" (Buolamwini & Gebru, 2018, p. 1; Bolukbasi et al., 2016; Caliskan, Bryson, & Narayanan, 2017; Elmi, 2021).
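
To make the word-association example concrete, the sketch below answers the analogy "man is to programmer as woman is to ?" with the standard embedding arithmetic of adding and subtracting word vectors and taking the nearest neighbor by cosine similarity. It is only an illustration: the four-dimensional vectors are made-up placeholders, whereas an actual audit such as Bolukbasi et al. (2016) would use pretrained embeddings like word2vec or GloVe.

import numpy as np

# Hypothetical 4-dimensional embeddings; a real analysis would load pretrained
# 300-dimensional word2vec or GloVe vectors instead.
emb = {
    "man":        np.array([0.9, 0.1, 0.3, 0.0]),
    "woman":      np.array([0.1, 0.9, 0.3, 0.0]),
    "programmer": np.array([0.8, 0.2, 0.1, 0.7]),
    "homemaker":  np.array([0.1, 0.8, 0.1, 0.7]),
    "doctor":     np.array([0.5, 0.5, 0.2, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Analogy: man : programmer :: woman : ?
query = emb["programmer"] - emb["man"] + emb["woman"]
candidates = {w: cosine(query, v) for w, v in emb.items()
              if w not in ("man", "woman", "programmer")}
print(max(candidates, key=candidates.get))  # a biased space answers "homemaker"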

The Gender Shades study by Buolamwini and Gebru (2018) compared three commercial facial recognition technologies, from IBM, Microsoft, and Face++, using three facial analysis datasets: IJB-A (a U.S. government benchmark used by the National Institute of Standards and Technology), Adience, and the Pilot Parliaments Benchmark (PPB) developed by Buolamwini and Gebru. IJB-A was composed of 79.6% lighter-skinned individuals and Adience of 86.2%, whereas PPB was composed of 53.6% lighter-skinned individuals. Adience contained 2,194 distinct individuals, only 302 of them darker-skinned, while PPB contained 1,270 individuals, 589 of whom had darker skin; PPB was therefore markedly more balanced than the other two datasets. The comparison was designed to determine each technology's accuracy with respect to both skin tone and gender. The results highlighted the extreme inaccuracy of facial recognition technology in identifying individuals: the systems performed best on lighter-skinned males and worst on darker-skinned females. The error rate for darker-skinned females ranged from 20.8% to 34.7%, while for lighter-skinned males it ranged from 0.0% to 0.3%, and darker-skinned males were still misclassified more often than lighter-skinned males (Buolamwini & Gebru, 2018). The researchers concluded that inclusive benchmark datasets are important for increasing transparency and accountability in AI.
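
The evaluation strategy behind a study like Gender Shades, reporting error rates separately for each intersection of skin tone and gender rather than as a single aggregate accuracy figure, can be sketched in a few lines. The data below are hypothetical stand-ins for real benchmark labels and classifier outputs, not the PPB results themselves.

import pandas as pd

# Hypothetical ground-truth labels and classifier outputs; the real study
# scored the gender labels returned by IBM, Microsoft, and Face++ against
# the PPB benchmark.
results = pd.DataFrame({
    "skin":      ["darker", "darker", "lighter", "lighter", "darker", "lighter"],
    "gender":    ["female", "male",   "female",  "male",    "female", "male"],
    "predicted": ["male",   "male",   "female",  "male",    "male",   "male"],
})

results["error"] = results["predicted"] != results["gender"]

# Disaggregated error rate per skin-tone/gender subgroup, the metric that
# exposes the gap between lighter-skinned males and darker-skinned females.
print(results.groupby(["skin", "gender"])["error"].mean())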

From the ADAPT Centre at Dublin City University and IBM Research in Dublin, Sen and Ganguly (2020) discussed a learning framework based on multiple objectives that helps ensure predictions derived from the data do not perpetuate social biases or stereotypes. The resulting model seeks to weaken specific, ethically problematic associations between pairs of concepts (e.g., associating Black people with criminality or fear with women). The research, based on emotion-prediction datasets, showed that the proposed bias-aware learning framework could remove some cognitive biases from the model's predictions (Sen & Ganguly, 2020).
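
One generic way to express such a multi-objective, bias-aware setup (a sketch of the general idea, not Sen and Ganguly's implementation) is to optimize the task loss while jointly penalizing any statistical association between the model's predictions and a protected attribute. The toy example below trains a logistic-regression model with a squared-covariance penalty; all data are synthetic.

import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic data: z is a protected attribute (0/1), X are input features whose
# last column is a noisy proxy for z, and the task label y is historically
# correlated with z.
z = rng.integers(0, 2, size=n).astype(float)
X = np.column_stack([rng.normal(size=(n, 4)), z + 0.1 * rng.normal(size=n)])
y = (X[:, 0] + 1.5 * z - 0.75 + 0.3 * rng.normal(size=n) > 0).astype(float)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w = np.zeros(X.shape[1])
lam, lr = 20.0, 0.5   # lam trades task accuracy against the bias penalty

for _ in range(2000):
    p = sigmoid(X @ w)
    # Objective 1: ordinary logistic-regression (task) gradient.
    grad_task = X.T @ (p - y) / n
    # Objective 2: gradient of the squared covariance between predictions and
    # the protected attribute; it pulls group-wise prediction rates together.
    cov = np.mean((p - p.mean()) * (z - z.mean()))
    grad_bias = 2 * cov * X.T @ ((z - z.mean()) * p * (1 - p)) / n
    w -= lr * (grad_task + lam * grad_bias)

p = sigmoid(X @ w)
print("prediction gap between groups:", p[z == 1].mean() - p[z == 0].mean())

Setting lam to zero recovers an ordinary classifier and, on this synthetic data, a noticeably larger gap between the two groups' average predictions.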

Key Terms in this Chapter

Transparency: The ability to see the entire workflow process; an understanding of, or explainability around, an algorithm's decision-making.

Bias: An attitude and/or stereotype that unconsciously affects a person’s perceptions and actions.

Accountability: The responsibility an entity bears for the actions of a tool, whether their consequences are intended or unintended.

Fairness: The equal treatment of all without favoring any one party; taking all sides into account so that decisions are based on impartial data and do not favor one group over another.

Digital Colonialism: The financial benefits of harvesting, owning, and trading data for the global development of AI.

Artificial Intelligence: A machine or technology that performs automated, human-like decision making.
