Ethical Navigations: Adaptable Frameworks for Responsible AI Use in Higher Education

Copyright: © 2024 | Pages: 25
DOI: 10.4018/979-8-3693-1565-1.ch005

Abstract

In an era driven by digital innovation, generative artificial intelligence (AI) emerges as a transformative force reshaping the landscape of higher education. Its potential to personalize learning experiences, bolster research capacities, and streamline administrative operations is revolutionary. However, the integration of generative AI into academia raises complex ethical issues for faculty and learners. Comprehensive ethical guidelines are imperative to ensure that the integration and utilization of AI in higher education align with the core values of academic integrity and social responsibility. This chapter examines the ethical frameworks essential for governing the use of generative AI technologies in academia and provides practical recommendations for the stakeholders involved. Additionally, emerging AI technologies such as the experimental NotebookLM and Gemini are discussed as future directions for AI use in teaching, learning, and research.
Background

This chapter aims to guide Higher Education Institutions (HEIs) through the complex ethical landscape of Artificial Intelligence. Combining a conceptual analysis of AI's evolving role in colleges and universities with an extensive literature review, this research explores the necessity of a proactive approach in HEIs regarding the ethical application of AI, focusing in particular on issues such as cheating, plagiarism, academic integrity, and student privacy. The literature review synthesizes key findings, leading to a proposed practical framework designed to assist HEIs in developing, revising, or finalizing their AI ethics policies. A wide array of resources dedicated to AI governance and ethics is also provided.

Additionally, the chapter considers future research and the ethical implications of emerging generative AI technologies. A concise overview of Google’s latest Large Language Model (LLM), Gemini, is presented. This AI resource is expected to be released to the public in 2024.

Key Terms in this Chapter

Structured Data: Data that is defined and searchable. This includes data like phone numbers, dates, and product SKUs.

Unstructured Data: Data that is undefined and difficult to search. This includes audio, photo, and video content. Most of the data in the world is unstructured.

Supervised Learning: A type of machine learning in which labeled output data are used to train a model to produce correct predictions. It is much more common than unsupervised learning.
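To make this definition concrete, here is a minimal sketch in plain Python of supervised learning, using a 1-nearest-neighbor classifier and made-up pass/fail exam data (both the method choice and the data are hypothetical illustrations, not drawn from the chapter):

```python
# Supervised learning sketch: learn from labeled (input, output) pairs,
# then predict labels for new, unseen inputs.

def train(examples):
    """'Training' for 1-nearest-neighbor is simply memorizing the labeled examples."""
    return list(examples)

def predict(model, x):
    """Label a new point with the label of its closest training point."""
    nearest = min(model, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Hypothetical labeled training data: exam scores mapped to pass/fail outcomes.
labeled = [(35, "fail"), (48, "fail"), (72, "pass"), (90, "pass")]
model = train(labeled)

print(predict(model, 80))  # a score near the "pass" examples -> "pass"
print(predict(model, 40))  # a score near the "fail" examples -> "fail"
```

The supervision is the set of known output labels; an unsupervised method would receive only the scores, with no pass/fail answers to learn from.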

Emergent Behavior: Emergent behavior, also called emergence, is when an AI system shows unpredictable or unintended capabilities.

Reinforcement Learning: A type of machine learning in which an algorithm learns by interacting with its environment and then is either rewarded or penalized based on its actions.
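As an illustration of the reward-and-penalty loop described above, the following Python sketch shows a tiny epsilon-greedy agent choosing between two hypothetical actions; the reward probabilities and all names are invented for this sketch:

```python
import random

# Reinforcement learning sketch: an agent interacts with its environment,
# receives rewards, and gradually learns which action pays off more.

random.seed(0)
reward_prob = {"A": 0.2, "B": 0.8}   # hidden from the agent: B is better
values = {"A": 0.0, "B": 0.0}        # the agent's estimated value of each action
counts = {"A": 0, "B": 0}

for step in range(500):
    # Explore with probability 0.1; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(values, key=values.get)
    reward = 1 if random.random() < reward_prob[action] else 0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get))  # the agent settles on the higher-reward action
```

No labeled answers are ever provided; unlike supervised learning, the agent discovers good behavior only through the rewards and penalties its own actions produce.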

Deep Learning: A function of AI that imitates how the human brain structures and processes information to make decisions. Instead of relying on an algorithm that can perform only one specific task, this subset of machine learning can learn from unstructured data without supervision.

Computer Vision: Computer vision is an interdisciplinary field of science and technology that focuses on how computers can gain understanding from images and videos.

Machine Learning: A subset of AI that incorporates aspects of computer science, mathematics, and coding. Machine learning focuses on developing algorithms and models that help machines learn from data and predict trends and behaviors, without human assistance.

Image Recognition: The process of identifying an object, person, place, or text in an image or video.

Transfer Learning: A machine learning system that takes existing, previously learned data and applies it to new tasks and activities.

Voice Recognition: A method of human-computer interaction in which computers listen and interpret human dictation (speech) and produce written or spoken outputs. Examples include Apple’s Siri and Amazon’s Alexa, devices that enable hands-free requests and tasks.

Chatbot: A software application designed to imitate human conversation through text or voice commands.

AI Ethics: The issues that AI stakeholders, such as engineers and government officials, must consider to ensure that the technology is developed and used responsibly. This means adopting and implementing systems that support a safe, secure, unbiased, and environmentally friendly approach to artificial intelligence.

Data Science: An interdisciplinary field of technology that uses algorithms and processes to gather and analyze large amounts of data to uncover patterns and insights that inform business decisions.

Algorithm: A sequence of rules given to an AI machine to perform a task or solve a problem. Common algorithms include classification, regression, and clustering.
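For instance, a regression algorithm is just such a sequence of rules; the ordinary-least-squares line fit below, written in plain Python over a small made-up data set, is one hypothetical example (not a method taken from the chapter):

```python
# A regression algorithm: ordinary least squares fits a line y = a*x + b
# by following a fixed sequence of rules over the data.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept follows from the means.
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]          # exactly y = 2x, so the fit recovers a=2, b=0
a, b = fit_line(xs, ys)
print(a, b)  # 2.0 0.0
```

Classification and clustering algorithms follow the same pattern: a fixed recipe of steps applied to data, differing only in whether the output is a label, a numeric prediction, or a grouping.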

Turing Test: The Turing test was created by computer scientist Alan Turing to evaluate a machine’s ability to exhibit intelligence equal to a human’s, especially in language and behavior. When facilitating the test, a human evaluator judges conversations between a human and a machine. If the evaluator cannot distinguish between their responses, the machine passes the Turing test.

Cognitive Computing: Cognitive computing is essentially the same as AI: a computerized model that focuses on mimicking human thought processes such as pattern recognition and learning.

Generative AI: A type of technology that uses AI to create content, including text, video, code, and images. A generative AI system is trained using large amounts of data, so that it can find patterns for generating new content.

Big Data: Large data sets that can be studied to reveal patterns and trends that support business decisions. It is called “big” data because organizations can now gather massive amounts of complex data using data collection tools and systems. Big data can be collected very quickly and stored in a variety of formats.

Data Mining: The process of sorting through large data sets to identify patterns that can improve models or solve problems.

Prescriptive Analytics: A type of analytics that uses technology to analyze data for factors such as possible situations and scenarios, past and present performance, and other resources to help organizations make better strategic decisions.
