ChatGPT and Its Ethical Implications on Libraries, Other Institutions, and Society: Is It a Viable Upgrade?

Barbara Jane Holland
DOI: 10.4018/979-8-3693-2841-5.ch014


On March 28, 2023, an open letter titled “Pause Giant A.I. Experiments” was published by the Future of Life Institute, urging A.I. companies to draft a shared set of safety protocols for advanced A.I. development before creating more powerful software that may pose dangers to humanity. A wide range of ethical issues has been raised concerning OpenAI's ChatGPT. ChatGPT has on numerous occasions produced output that reflects racial and gender bias. The chatbot is built on learning models that are not bias-free; it follows its algorithm blindly, replying with the requested information when prompted, and it cannot tell whether that information is skewed. This chapter examines the ethical implications ChatGPT can have on libraries, other institutions, and society.
Chapter Preview

Background and Literature

The origins of AI and chatbots can be traced back to the 1950s, when scientists first investigated the concept of artificial intelligence. Alan Turing’s research in the early 1950s laid the foundation for modern computer science, while John McCarthy coined the term “artificial intelligence” in 1956. Several years later, McCarthy and his colleagues set up the Artificial Intelligence Project at MIT.

Early milestones included ELIZA, developed at MIT in the 1960s, one of the first programs designed to simulate human conversation.

Developments in machine learning and natural language processing propelled AI back into the spotlight in the 1990s. Ask Jeeves debuted in 1996 with a unique question-and-answer format that allowed users to get responses using both natural language and keyword searching. In 2006, the Jeeves character was phased out and the service was renamed Ask.com, with a simpler question-and-answer format. Ask.com exited the search market in 2010, cutting 130 search-engineering jobs, because it could not compete with more popular search engines such as Google.

On November 30, 2022, the startup OpenAI launched ChatGPT, a sibling model to InstructGPT that is trained to follow instructions in a prompt and deliver a detailed response. The researchers intended the public release to help them learn about the system's strengths and weaknesses.

OpenAI, of San Francisco, CA, released the AI chatbot ChatGPT in November 2022. Developed using human feedback and freely accessible, the platform has already attracted millions of interactions (Grant & Metz, 2022).

ChatGPT (Chat Generative Pre-trained Transformer) is based on the GPT-3 (Generative Pre-trained Transformer 3) large language model (LLM). Such a large language model is a deep neural network trained on massive amounts of text data using billions of parameters.

ChatGPT is a large-scale pre-trained language model that can generate coherent and fluent texts on various topics and domains. ChatGPT can also engage in conversational interactions with human users, providing information, entertainment, and assistance.
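As a rough intuition for how a generative pre-trained transformer produces text, the toy sketch below mimics autoregressive generation: sampling one token at a time, conditioned on the tokens generated so far. A hand-built bigram table stands in for GPT-3's 175-billion-parameter network; all token names and probabilities here are invented for illustration and are not OpenAI's implementation.

```python
import random

# Purely illustrative stand-in for a trained language model: for each
# previous token, a list of (next_token, probability) candidates.
BIGRAMS = {
    "<s>": [("the", 1.0)],
    "the": [("model", 0.6), ("text", 0.4)],
    "model": [("generates", 1.0)],
    "generates": [("text", 1.0)],
    "text": [("</s>", 1.0)],
}

def generate(max_tokens=10, seed=0):
    """Autoregressive sampling: pick each next token given the last one."""
    rng = random.Random(seed)
    tokens = ["<s>"]
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:
            break
        words, weights = zip(*candidates)
        nxt = rng.choices(words, weights=weights)[0]
        if nxt == "</s>":  # end-of-sequence token stops generation
            break
        tokens.append(nxt)
    return " ".join(tokens[1:])

print(generate())
```

A real LLM works the same way in outline, but the next-token distribution comes from a deep neural network conditioned on the entire preceding context rather than a lookup table.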

ChatGPT must be fine-tuned to minimize the risk of generating offensive, biased, or inappropriate content (Rao et al., 2023). This involves continuous work on the training data, model architecture, and monitoring mechanisms. Robustness and security are further concerns: conversational AI models can be vulnerable to adversarial attacks and malicious inputs.

When presented with a query, ChatGPT automatically generates a response based on thousands of internet sources, often without further input from the user. As a result, individuals have reportedly used ChatGPT to write university essays and scholarly articles and, if prompted, the system can deliver accompanying references.

GPT-3 was trained on 45 TB of text data using 175 billion parameters and was developed to enhance task-agnostic performance, even becoming competitive with prior state-of-the-art fine-tuning approaches (Brown et al., 2020). Brown et al. (2020) stated that GPT-3 is ten times larger than any previous non-sparse language model. GPT-3 has become the basic NLP engine behind the recently developed ChatGPT, which has attracted attention in fields including, but not limited to, education (Williams, 2023; Tate, 2023) and engineering (Qadir, 2022).

Some scientists are already using chatbots as research assistants, according to Nature (2022). These chatbots can help scientists organize their ideas, provide feedback on their work, help write code, and summarize the research literature.

Key Terms in this Chapter

Ethical Design Framework (EDF): Provides a set of guidelines and questions to help designers and developers consider the ethical implications of their ChatGPT systems throughout the design process.

Ethical Turing Test (ETT): Proposes a method to evaluate the ethical performance of ChatGPT systems by comparing their responses to those of human experts on various ethical dilemmas.

Ethical Chatbot Manifesto (ECM): Outlines a set of principles and values that ChatGPT systems should adhere to in order to respect the rights and dignity of users and third parties.

LLM: Large language model.

GPT-3: Generative Pre-trained Transformer 3, an autoregressive language model released in 2020 that uses deep learning to produce human-like text.

Alignment: The degree to which a model's behavior matches what we want it to do.

ChatGPT: Chat Generative Pre-Trained Transformer.

Ethical Impact Assessment (EIA): A tool to assess the potential ethical impacts of ChatGPT systems on different stakeholders and domains, and to identify and mitigate possible risks or harms.
