Analysis of Ethical Development for Public Policies in the Acquisition of AI-Based Systems

Reinel Tabares-Soto, Joshua Bernal-Salcedo, Zergio Nicolás García-Arias, Ricardo Ortega-Bolaños, María Paz Hermosilla, Harold Brayan Arteaga-Arteaga, Gonzalo A. Ruz
Copyright: © 2022 |Pages: 29
DOI: 10.4018/978-1-6684-5892-1.ch010

Abstract

The exponential growth of AI and its applications in different areas of society, such as the financial, agricultural, telecommunications, or health sectors, poses new challenges for the government's public sector, mainly in regulating these systems. Governments and entities in general address these challenges by formulating soft laws such as manuals or guidelines. They seek full transparency, privacy, and bias reduction when implementing an AI-based system, including its life cycle and respective data management or governance. These tools and documents aim to develop an ethical AI that addresses or solves the aforementioned ethical implications. The revision of 22 documents within frameworks, guides, articles, toolkits, and manuals proposed by different governments and entities are examined in detail. Analyses include a general summary, the main objective, characteristics to be highlighted, advantages and disadvantages if any, and possible improvements.

Introduction

Artificial Intelligence (AI) is mentioned ever more frequently across society and is no longer a distant term in daily life. AI systems are present in the financial, automotive, health, science, education, industrial, and telecommunications sectors, among others, and their development continues to accelerate (Berryhill et al., 2019). AI permeates society, and today its uses can be found almost everywhere: translation apps, recommendation systems (e.g., when people search on Google or YouTube), voice assistants such as Siri or Alexa, and optimized traffic routing, among others. As a result, demand for AI systems keeps growing as AI specializes in ever more specific tasks. AI promises to generate productivity gains, improve well-being, and help solve complex challenges. Its predictions, recommendations, and decisions are also precise, and it does not require a high economic cost (“Artificial Intelligence in Society,” 2019).

The uses of AI can generate significant advances in different sectors of society, and more than 60 countries are developing national AI strategies to maximize its potential. According to the “Government AI Readiness Index 2021,” nearly 40% of the 160 countries surveyed have published or are drafting national AI strategies (Fuentes et al., 2022), which shows that AI is fast becoming a top concern for leaders globally. In the United States, Japan, Germany, Finland, and eight other developed economies, implementing AI systems could raise annual economic growth rates by approximately two percentage points and improve labor productivity by around 40% by 2035 (Purdy & Daugherty, 2016; Wirtz et al., 2019). Governments play a crucial role in setting national strategic priorities, public investments, and regulations (Fuentes et al., 2022; “The Strategic and Responsible Use,” 2022). However, the regulatory and social limits of AI use are becoming increasingly visible, and governments have two ways to address this problem. The first is hard law: laws or regulations created by governments that are mandatory. This process, however, entails high costs in time and economic resources and ultimately does not provide quick answers to emerging problems. The second is soft law: programs that set substantive expectations but that governments cannot directly enforce. In other words, they are non-binding, can exist without jurisdiction, and can be developed, modified, and adapted by any entity (Gutierrez & Marchant, 2021).

For governments and interested stakeholders, using AI systems also implies having a clear understanding of the challenges involved. The development or acquisition of an AI technology poses challenges across every phase of its life cycle (“Artificial Intelligence in Society,” 2019):

  • Planning and design.

  • Data collection and processing.

  • Model construction and interpretation.

  • Verification and validation.

  • Deployment.

  • Operation and monitoring.
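
The life-cycle phases above can be sketched as an ordered procurement checklist. This is a minimal illustrative sketch, not part of the chapter or any cited framework; the names `LIFECYCLE_PHASES` and `review_progress` are assumptions introduced here.

```python
# Illustrative sketch: the AI life-cycle phases listed above,
# modeled as an ordered checklist for a procurement review.
# Names and structure are hypothetical, not from the chapter.
LIFECYCLE_PHASES = [
    "planning and design",
    "data collection and processing",
    "model construction and interpretation",
    "verification and validation",
    "deployment",
    "operation and monitoring",
]

def review_progress(completed):
    """Return the earliest phase not yet signed off, or None if all are done."""
    for phase in LIFECYCLE_PHASES:
        if phase not in completed:
            return phase
    return None
```

For example, a review that has only signed off on planning and design would next be flagged for "data collection and processing".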

Wirtz et al. (2019) proposed four main dimensions that frame these challenges: AI technology implementation, AI law and regulation, AI ethics, and AI society, identifying 15 sub-challenges among them. This reveals the public sector's burden of establishing priorities, investments, and national regulations (Berryhill et al., 2019).

In particular, the compendium of ethical and regulatory risks identified by different organizations and companies addresses privacy and security, algorithmic discrimination or bias, and transparency or opacity (Buenadicha Sánchez et al., 2019; “Artificial Intelligence in Society,” 2019). All these issues require a responsible approach grounded in reliability and explainability and, above all, a focus on the target population when designing and implementing the AI system. Addressing them must therefore be an ongoing process that identifies trade-offs, mitigates risk and bias, and ensures open and accountable processes and actions (“The Strategic and Responsible Use,” 2022).

Key Terms in this Chapter

Artificial Intelligence (AI): Refers to the set of algorithms or computational methods that aim to give computers the characteristics or abilities of human intelligence.

Soft Laws: Government programs that establish substantive expectations but are not mandatory, such as recommendations, guides, directives, manuals, and frameworks, among others.

Parity Metrics: A set of metrics that indicate the state of equality across groups and the quality of the data used.
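
One common parity metric is the difference in positive-outcome rates between two groups. The sketch below is an illustrative assumption for this glossary entry, not a metric defined in the chapter; the function name `statistical_parity_difference` is introduced here.

```python
def statistical_parity_difference(outcomes_a, outcomes_b):
    """Difference in positive-outcome rates between two groups.

    outcomes_a, outcomes_b: lists of 0/1 decisions for each group.
    A value near 0 suggests parity between groups; a large magnitude
    flags a potential disparity worth investigating for bias.
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return rate_a - rate_b
```

For example, if group A receives a positive decision 50% of the time and group B only 25% of the time, the metric returns 0.25, signaling a disparity to examine.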

Deep Neural Networks (DNN): A type of deep learning model commonly used for classification tasks, built from compositions of linear and non-linear mathematical functions.

Deep Learning (DL): A subset of artificial intelligence techniques that comprises models based on artificial neural networks.

Convolutional Neural Networks (CNN): A type of deep learning model commonly used for image-related tasks. It uses the mathematical operation of convolution to extract features from images.
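
The convolution operation mentioned above can be shown in a minimal sketch: sliding a small kernel over an image and summing elementwise products (most deep learning libraries actually compute cross-correlation, as here). The function name `convolve2d_valid` is an assumption for illustration, not from the chapter.

```python
import numpy as np

def convolve2d_valid(image, kernel):
    """'Valid' 2D cross-correlation, the core operation of a CNN layer.

    Slides the kernel over every position where it fully fits inside
    the image and records the sum of elementwise products.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

Applying a kernel such as `[[-1, 1], [-1, 1]]` to an image produces high responses at vertical edges, which is the sense in which convolution "extracts features."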

Data Governance: The set of processes, responsibilities, policies, standards, and metrics that ensures the availability, usability, and security of data in business systems.

Machine Learning (ML): A subfield of AI and an evolving branch of computational algorithms designed to emulate human intelligence by learning from the surrounding environment.
