Organizational Dynamics and Bias in Artificial Intelligence (AI) Recruitment Algorithms

Copyright: © 2024 | Pages: 22
DOI: 10.4018/979-8-3693-1970-3.ch015

Abstract

The integration of artificial intelligence (AI) into recruitment processes has promised to revolutionize and optimize the hiring landscape. However, recent legal proceedings have shed light on the alarming implications of AI algorithms in the employment sector. This chapter delves into a significant case study where African American, Latina American, Arab American, and other marginalized job applicants and employees filed a 100-million-dollar class action lawsuit against a prominent organization, Context Systems. The suit alleges that AI screening tools, entrusted with the crucial task of selecting candidates, have been marred by programming bias, leading to discriminatory outcomes. This case study critically examines the multifaceted problems arising from bias in AI algorithms, revealing their detrimental effects on marginalized communities in the employment sector. By scrutinizing this pivotal case, the authors aim to provide insights into the urgent need for transparency, accountability, and ethical considerations in the development and deployment of AI-driven recruitment tools.
Chapter Preview

Introduction

Do hiring algorithms prevent bias or amplify it? This fundamental question has created tension between the technology's proponents and its skeptics, but arriving at an answer is more complicated than either side suggests.

African American, Latina American, and Arab American applicants, along with other marginalized employees, won a 100-million-dollar class action lawsuit against Context Systems, an invented name used to protect the privacy of the actual organization. The suit, pursued in arbitration, alleged that artificial intelligence (AI) screening tools rejected job applicants because of bias in the programming algorithm. This technological failure of judgment prevented minority applicants and current employees from being interviewed for job opportunities. During cross-examination of the evidence, it was shown that the AI recruiting tool relied on biased programming, which produced discriminatory outcomes. In recent years, the integration of AI into recruitment processes has promised to streamline and enhance hiring. However, the case of African American, Latina American, Arab American, and other marginalized job applicants and employees pursuing this class action against a prominent organization has revealed the darker side of AI recruitment tools. This chapter examines the pivotal dynamics of the case, aiming to unpack the multifaceted problems stemming from bias in AI algorithms and their detrimental effects on marginalized communities in the employment sector.

Discrimination and bias in hiring constitute persistent barriers to achieving equitable and inclusive workplaces. Legal frameworks are pivotal in addressing these issues, providing guidelines and protections for job seekers and employees (Cheung et al., 2016; Huff et al., 2023; Burrell et al., 2021; McLester et al., 2021).

The cornerstone of U.S. anti-discrimination law is Title VII of the Civil Rights Act of 1964. It prohibits discrimination based on race, color, religion, sex, and national origin in employment practices, including hiring. The Age Discrimination in Employment Act (ADEA) addresses age-based discrimination, while the Americans with Disabilities Act (ADA) protects individuals with disabilities from discrimination. Additionally, the Genetic Information Nondiscrimination Act (GINA) prohibits discrimination based on genetic information (Cheung et al., 2016; Huff et al., 2023; Burrell et al., 2021; McLester et al., 2021).

In tandem with federal laws, state legislatures have enacted their own anti-discrimination statutes. These laws often extend protections beyond those provided by federal regulations. State legislation may cover additional protected categories and apply to smaller employers (Cheung et al., 2016; Huff et al., 2023; Burrell et al., 2021; McLester et al., 2021).

The Equal Employment Opportunity Commission (EEOC) enforces federal anti-discrimination laws and provides guidelines for employers. It investigates complaints of discrimination and plays a pivotal role in enforcing compliance with Title VII, the ADEA, and the ADA. The EEOC's guidance informs best practices that help employers avoid discriminatory hiring practices (Cheung et al., 2016; Huff et al., 2023; Burrell et al., 2021; McLester et al., 2021).

Two fundamental legal theories that address discrimination in hiring are disparate impact and disparate treatment. Disparate impact occurs when a seemingly neutral employment practice disproportionately impacts a protected group. In contrast, disparate treatment involves intentional discrimination based on a protected characteristic (Cheung et al., 2016; Huff et al., 2023; Burrell et al., 2021; McLester et al., 2021).
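Disparate impact is often screened for numerically using the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if the selection rate for a protected group is less than 80% of the rate for the most-selected group, the practice may warrant scrutiny. The sketch below is a minimal illustration of that heuristic with hypothetical applicant counts; the function name and the numbers are the authors' own examples, not figures from the Context Systems case.

```python
def adverse_impact_ratio(selected_protected, applicants_protected,
                         selected_reference, applicants_reference):
    """Ratio of selection rates, as used in the EEOC's
    four-fifths (80%) rule heuristic for disparate impact."""
    rate_protected = selected_protected / applicants_protected
    rate_reference = selected_reference / applicants_reference
    return rate_protected / rate_reference

# Hypothetical screening outcome: an AI tool advances 30 of 100
# protected-group applicants but 60 of 100 reference-group applicants.
ratio = adverse_impact_ratio(30, 100, 60, 100)
print(f"{ratio:.2f}")  # 0.50

# A ratio below 0.8 flags possible disparate impact under the heuristic.
if ratio < 0.8:
    print("possible disparate impact")
```

Note that the four-fifths rule is a rule of thumb for flagging practices, not a legal finding; litigation typically also involves statistical significance testing and evidence about job-relatedness.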
