Algorithms and Bias

Julie M. Smith
DOI: 10.4018/978-1-7998-3473-1.ch065

Abstract

Initially, automated decision-making was seen as a corrective to discrimination: no longer would one biased individual be able to allow his or her prejudices to control decisions about employment, housing, banking, or criminal justice. However, this promise has not been fulfilled. Rather, recent experiences with a variety of platforms and services suggest that algorithms may be reproducing—and in some cases, even amplifying—human biases. This chapter will explore the problem of discriminatory bias in algorithms and propose best practices for minimizing the problem.

Background

While there is widespread agreement that algorithmic bias exists, defining it precisely is tricky, as there are different ways to measure bias. Using the example of an algorithm that assesses risk for criminal behavior, one might measure bias in several different ways (Huq, 2019). One could look at aggregate risk scores for various groups and see if they differ. Or, one could determine whether the same initial risk score resulted in the same final risk score for people with different demographic characteristics. One could also determine whether the rate of false positives and/or false negatives varied for each demographic group. Efforts to improve the fairness of an algorithm on one measure may actually lead to worse performance on another measure. For example, Speicher et al. (2018) borrowed the concept of inequality indices, which are used by economists, and applied them to biased algorithms. This provided a way to quantify bias, but they also found that efforts to minimize between-group bias may actually increase within-group bias.
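To make these competing measures concrete, the following is a minimal sketch (not drawn from the chapter) of how two of them might be computed for a hypothetical risk-scoring tool. The data, column names, and decision threshold are all illustrative assumptions.

```python
# Illustrative only: hypothetical risk-scoring records with a predicted
# risk score, the true outcome, and a demographic group label.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "risk_score": [0.2, 0.7, 0.9, 0.4, 0.8, 0.6],
    "reoffended": [0,   1,   1,   0,   0,   1],
})
df["flagged"] = df["risk_score"] >= 0.5  # assumed decision threshold

for group, sub in df.groupby("group"):
    # Measure 1: aggregate (mean) risk score per group
    mean_score = sub["risk_score"].mean()
    # Measure 2: false positive rate per group (flagged but did not reoffend)
    negatives = sub[sub["reoffended"] == 0]
    fpr = negatives["flagged"].mean() if len(negatives) else float("nan")
    print(f"group {group}: mean score {mean_score:.2f}, false positive rate {fpr:.2f}")
```

Even in this toy example, the two measures need not agree: the groups can have identical average scores while their false positive rates differ, which is the kind of tension between fairness criteria described above.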

For purposes of this article, a biased algorithm will be defined as one that unfairly and/or inaccurately discriminates against a certain person or group of people, especially on the basis of protected categories such as race and/or gender. In some cases (as will be discussed below), the bias is not in the algorithm per se but rather in the data it uses.
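As a rough illustration of this distinction, the following sketch (a toy lending scenario that is assumed, not taken from the chapter) shows how a "model" that simply learns from skewed historical decisions will reproduce that skew in its predictions, even though the learning procedure itself contains no explicit rule about group membership.

```python
# Illustrative only: the bias lives in the historical data, not in the code.
from collections import defaultdict

# Hypothetical historical decisions, skewed against group "B"
history = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in history:
    totals[group] += 1
    approvals[group] += approved

def predict(group: str) -> float:
    """Predicted approval probability, learned only from the skewed history."""
    return approvals[group] / totals[group]

print(predict("A"))  # ~0.67 -- the historical skew reappears in the output
print(predict("B"))  # ~0.33
```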

Unfortunately, examples of algorithmic bias are not difficult to find. One study found that searches for names associated with people of color were far more likely to return negative results (in this case, related to arrest records) than neutral or positive ones; a black-identified name was 25% more likely to return an ad for an arrest record (Sweeney, 2013). In an incident that went viral in 2015, Jacky Alcine and his friend, both African American, were tagged as “gorillas” by Google Photos (Garcia, 2016). Google responded promptly by removing the auto-tags for terms that might be offensive, but it did not fix the underlying problem: the algorithm’s inability to properly identify people with darker skin (Monea, 2019).

Key Terms in this Chapter

Machine Learning: A process where a computer program “learns” from a data set.

Artificial Intelligence (AI): A process performed by a computer that, had it been performed by a human, observers would conclude required intelligence.

Internet of Things: Devices (such as household appliances and sensors) that are connected to the internet.

Algorithmic Bias: The intentional or unintentional bias that can result from using an algorithm to make a decision.

Algorithm: A set of rules that a computer follows to generate an outcome.

Big Data: A very large data set; used in a variety of fields to reach decisions.
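To illustrate the relationship between the terms “algorithm” and “machine learning” as defined above, the following minimal sketch contrasts a fixed, hand-written rule with a rule derived from a data set. The loan scenario, function names, and numbers are hypothetical assumptions, not material from the chapter.

```python
# Algorithm: a fixed set of rules a computer follows to generate an outcome.
def approve_loan_rule(income: float) -> bool:
    return income >= 40_000  # human-chosen threshold

# Machine learning: the rule (here, a threshold) is "learned" from a data set.
def learn_threshold(past_incomes: list[float], past_approvals: list[bool]) -> float:
    approved = [x for x, ok in zip(past_incomes, past_approvals) if ok]
    return min(approved)  # simplest possible "model": lowest approved income

threshold = learn_threshold([30_000, 45_000, 60_000], [False, True, True])
print(approve_loan_rule(50_000))   # decision from the fixed rule
print(50_000 >= threshold)         # decision from the learned rule
```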
