Beyond Tools and Procedures: The Role of AI Fairness in Responsible Business Discourse

Ivana Bartoletti, Lucia Lucchini
Copyright: © 2022 |Pages: 8
DOI: 10.4018/978-1-7998-8467-5.ch002

Abstract

As artificial intelligence (AI) is increasingly deployed in almost all aspects of our daily lives, the discourse around the pervasiveness of algorithmic tools and automated decision-making appears almost trivial. This chapter investigates the limits and opportunities within existing debates and examines the rapidly evolving legal landscape and recent court cases. The authors suggest that a viable approach to fairness, which ultimately remains a choice that organizations have to make, could be rooted in a new measurable and accountable responsible business framework.
Chapter Preview

Introduction

As Artificial Intelligence (AI) is increasingly deployed in almost all aspects of our daily lives, the discourse around the pervasiveness of algorithmic tools and automated decision-making appears almost trivial.

AI solutions now drive the allocation of resources and shape the news and products that individuals are exposed to: from credit scoring to facial recognition, from predictive technologies that promise to identify fraudsters with precision to youth crime prevention tools and algorithm-driven advertising, the reach of AI is already a reality we all live with daily.

The prolific number of cases showing evident misuse of these technological tools, and often of the personal data within them, is paving the way for novel discussions around ethics and its role in the digital world. Whether through the scraping of the web in search of faces to be used as facial recognition training data (Hill, 2020), or through algorithms automatically rewarding private school pupils with higher grades (Burgess, 2020), the politics of data (and of data classification) has become impossible to hide or ignore.

Recognizing this allows us, as a society, to question the role that ethics can play in the development, deployment, and use of technology. While public calls for regulatory scrutiny are on the rise and dominate news headlines, we have yet to define how agencies, governments, and private sector organizations can provide meaningful notice about algorithmic decision-making outputs. This gap has led to the deployment of flawed automation, the consequences of which ultimately harm trust in technology and hinder human rights by limiting individuals' access to services and equal opportunities, or locking them out altogether.

In recent years, courts have ruled on the impacts that the misuse of technology can have on individuals (for instance, in cases involving Deliveroo, Uber, and others). However, as Calo and Citron (2021) argue, the limit of litigation is that it can only address violations already enshrined in law. As Ajunwa (2019) further argues, “instead of focusing on novelty, we should focus on salience” (p. 1675), moving the conversation away from technical solutions to a technical problem and toward reformed public policy that can root out and address the issues arising from flawed automation.

This chapter’s premise is that the very essence of machine learning is to differentiate, which means that bias lies at the core of the technology. The bias we, as a society, should be most concerned about is the kind that causes either allocational harm (unfairly denying someone a service or a good) or representational harm (the perpetuation of inequality by, for example, encoding stereotypes in advertising tools).

To an extent, the engineering of unfairness is the inevitable outcome of the politics of datafication and data collection: basing decisions about offending on an existing database of offenders, for example, is likely to be unfair, as the most vulnerable communities are also the most surveilled. It follows that for an organization to make a fair decision, the algorithm (and the data it is fed) must be actively adjusted to produce a fair outcome. The question then becomes why organizations would opt for fairness if fairness is financially detrimental to their business. Ultimately, choosing fairness is a political, social, and ethical choice, as it may lead to less efficient outcomes for an organization.
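To make this concrete, the following is a minimal sketch, not drawn from the chapter, of one common mathematical formalization of fairness (demographic parity); the function name, data, and group labels are all hypothetical. An organization could compute such a gap before deciding whether to intervene on its data or model.

```python
# A minimal, illustrative sketch (not from the chapter): measuring demographic
# parity, one common mathematical formalization of fairness. All data, group
# labels, and numbers below are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-outcome rates between two groups."""
    rates = []
    for g in sorted(set(groups)):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates.append(sum(outcomes) / len(outcomes))
    return abs(rates[0] - rates[1])

# Hypothetical loan decisions (1 = approved) for two demographic groups.
preds  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Group A is approved 80% of the time, group B only 20%: a gap the
# organization could choose to close by adjusting its data or model.
print(demographic_parity_gap(preds, groups))  # ~0.6
```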

For example, when a company that sells house-cleaning products uses machine learning to identify potential consumers to target, it is likely to optimize for success by targeting those who most frequently buy such products, which for historical reasons means women. Should the same company decide to be ‘fair’ and promote its products equally to men and women, the outcome would be more equitable, though possibly less financially viable, as fewer people would click on the ads. A toy calculation below, with entirely hypothetical click-through rates and budget, illustrates why the equitable choice can be the less ‘efficient’ one.
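```python
# A toy illustration of the trade-off described above, using entirely
# hypothetical click-through rates and an invented impression budget:
# targeting only the historically higher-clicking group maximizes clicks,
# while equal exposure is more equitable but yields fewer clicks.

BUDGET = 1000                                # ad impressions to allocate
click_rate = {"women": 0.05, "men": 0.01}    # assumed historical rates

def expected_clicks(allocation):
    return sum(n * click_rate[g] for g, n in allocation.items())

optimized = {"women": BUDGET, "men": 0}                 # profit-maximizing
equitable = {"women": BUDGET // 2, "men": BUDGET // 2}  # equal exposure

print(expected_clicks(optimized))  # 50.0 expected clicks
print(expected_clicks(equitable))  # 30.0 expected clicks: fairer, less lucrative
```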

This chapter argues that, as opting for fairness may not be the optimal financial solution for an organization, its formalization resides in the responsible business agenda that is gaining ground amid consumers’ demand for greater equity and transparency.

Key Terms in this Chapter

Algorithmic Bias: The unintended and potentially harmful skewing of algorithmic predictions.

Equality: The belief that all humans are fundamentally equal and deserve equal treatment.

Justice: Adequate adherence to the standards established in a given society.

AI Ethics: The systematic conceptualization of ‘right’ and ‘wrong’ based on five key themes: beneficence, non-maleficence, autonomy, justice, and explicability.

Fairness: To be distinguished between its sociological and its mathematical meaning. From the sociological perspective, fairness describes the way people are treated in a society and is heavily grounded in ethical values; in the mathematical sense, it refers to formal criteria imposed on a model’s outputs, such as equal outcome rates across groups.

Equity: The value that drives the reduction of avoidable inequalities between people in society.
