Generative AI in Higher Education

Copyright: © 2024 |Pages: 37
DOI: 10.4018/979-8-3693-0831-8.ch001

Abstract

This chapter provides a comprehensive exploration of generative artificial intelligence (AI), particularly focusing on its implications and applications in higher education. It discusses the evolution and fundamental concepts of AI, including large language models and their development, emphasizing the intricate processes involved in creating and refining these models. The chapter delves into the ethical considerations and potential biases inherent in AI systems, highlighting the importance of responsible AI development. Moreover, the chapter examines the transformative potential of generative AI in enhancing learning, creativity, and information processing in higher education settings.
Chapter Preview

Generative AI: An Introduction

The year was 1994 when the Today Show anchors Katie Couric, Bryant Gumbel, and Elizabeth Vargas sat on a couch and tried to answer the question “What is the internet?” The now-infamous clip shows three of the most trusted names in daily news trying to understand the “@” symbol. Bryant Gumbel was clearly confused when he said, “I’d always seen the mark but never heard it said and then sounded stupid when I said it ‘violence at NBC’ [email address shows on the screen]. There it is violence…at NBC GE I mean … What is internet, anyway?” The hosts’ lack of understanding of email and the internet played out for the world to see, but they were grappling with a new technology that most of the world was still unfamiliar with. The video provides a highly entertaining snapshot of a specific moment in time.

We entered a similar moment of the unknown on November 30, 2022, when OpenAI released ChatGPT to the public. ChatGPT was not the first large language model (LLM) released to the public. The earliest examples of a language model date to 1966 and Joseph Weizenbaum’s ELIZA. Nor was ChatGPT the first breakthrough LLM; that distinction arguably belongs to Google’s bidirectional encoder representations from transformers (BERT) model in 2019. Even though GPT-2 (February 2019) and GPT-3 (June 2020) existed, the public took little notice of those models. As Roose (2023) reported, with other companies preparing to release their own LLMs to the public, executives at OpenAI in November 2022 asked their team to put together a publicly available chatbot, to be called “Chat with GPT-3.5.” Weeks later, ChatGPT, which used the GPT-3.5 LLM, was released to the public. It reached one million registered users in five days and more than 100 million users in just two months. Margaret Mitchell, a leading figure in artificial intelligence (AI) ethics and research and Hugging Face’s Chief Ethics Scientist, noted in an article in Time magazine, “Most of us are pretty surprised” by how fast ChatGPT was adopted. “The technology wasn’t putting forth any sort of fundamental breakthroughs” (Chow, 2023, para. 8). The difference was that the technology was released to the public, who could interact with it directly. LLMs had long been used by computer scientists and researchers, but ChatGPT put the power of LLMs in the hands of the public.

As we write, one year after ChatGPT went public, the landscape of AI and LLMs has changed radically. Our goal in this chapter is to introduce you to generative AI.

Key Terms in this Chapter

Natural Language Processing (NLP): This field of AI focuses on enabling computers to understand, interpret, and respond to human language in a way that is both meaningful and useful.

Artificial Super Intelligence (ASI): ASI is a theoretical form of AI that surpasses human intelligence across all fields, including creativity, general wisdom, and problem solving. It represents a level of intelligence that not only mimics but also exceeds human capabilities in every aspect.

Transfer Learning: The practice of applying knowledge or models developed for one task to a different but related task. This approach is particularly useful for accelerating or improving the performance of AI models.

Large Language Models (LLMs): These are advanced AI models specialized in understanding and generating human language. They are trained on vast amounts of text data and can perform a wide range of language-related tasks.

Neural Networks: Inspired by the structure of the human brain, these are a series of algorithms designed to recognize patterns in data. They underpin many common AI tasks, such as image and speech recognition.

Fine-Tuning: The process of making small adjustments to a pretrained AI model to adapt it for a specific task or improve its performance.

Reinforcement Learning: A type of machine learning in which an agent learns to make decisions by performing actions and receiving feedback from those actions, often in the form of rewards or penalties.
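The reward-and-penalty feedback loop described above can be sketched with tabular Q-learning, one common reinforcement learning algorithm. The toy environment below is hypothetical (a five-cell corridor with a reward only at the rightmost cell), chosen purely to make the action–feedback–update cycle visible; it is a minimal illustration, not a production implementation.

```python
import random

# Hypothetical toy environment: a 5-cell corridor. The agent starts in cell 0
# and earns a reward of 1.0 only on reaching cell 4; every other step yields 0.
# Actions: 0 = move left, 1 = move right.
N_STATES, ACTIONS = 5, (0, 1)

def step(state, action):
    """Apply an action, returning (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: one row per state
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore at random.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward the reward plus the
            # discounted value of the best next action (the feedback signal).
            q[state][action] += alpha * (
                reward + gamma * max(q[nxt]) - q[state][action]
            )
            state = nxt
    return q

q = train()
# After training, the greedy policy in every non-terminal cell is "move right".
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
```

Here the "agent" is nothing more than the Q-table plus the epsilon-greedy rule: the reward signal alone, propagated backward through the update, is enough to teach it to walk toward the goal.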

Generative AI: A branch of AI that focuses on creating new content or data that are similar to but not identical to the training data. This includes generating images, text, music, etc.

Singularity: Often associated with the field of AI, it is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, leading to unforeseeable changes in human civilization. This concept is frequently linked to the emergence of AGI and the rapid acceleration of technological progress it could catalyze.

Algorithmic Transparency: This is the principle that the processes and outcomes of AI algorithms should be open and understandable to users and stakeholders. This approach ensures that people can understand how AI systems make decisions.

Machine Learning (ML): A subset of AI where algorithms use data to learn and make predictions or decisions. Instead of being explicitly programmed, these algorithms improve automatically through experience.

Explainable AI (XAI): This type of AI is designed to make its functioning transparent and understandable to humans. XAI aims to explain how and why AI systems make certain decisions.

Generative Adversarial Networks (GANs): These are AI models where two neural networks (the generator and the discriminator) are trained together. The generator creates data, and the discriminator evaluates it, leading to highly realistic generated content.

Deep Learning: A subset of machine learning involving layered neural networks. These networks can learn from large amounts of data and are particularly effective in tasks such as image and speech recognition.

Artificial Narrow Intelligence (ANI): This refers to AI systems that are designed to perform a single task or a limited range of tasks. These systems exhibit intelligence in specific contexts but lack the broader understanding and adaptability of human intelligence.

Privacy and Data Security: These terms refer to the practices and policies that protect data from unauthorized access and ensure its confidentiality and integrity in AI systems.

Artificial General Intelligence (AGI): AGI represents a stage of AI development in which the system possesses the ability to understand, learn, and apply its intelligence to a wide range of problems, similar to the level of a human being. Such a system can generalize learning and reasoning across diverse domains and is not limited to specific tasks.

Artificial Intelligence (AI): A field of computer science focused on creating machines capable of performing tasks that typically require human intelligence. These tasks include learning, problem solving, perception, and language understanding.

Data Ethics: This examines the moral issues and standards surrounding data, algorithms, and their practices. It is about ensuring fairness, privacy, and integrity in the handling and use of data.

Tensor Processing Unit (TPU): A type of AI accelerator hardware developed by Google specifically for neural network machine learning, offering high-performance computation tailored for TensorFlow, Google’s machine learning framework.

Supervised Learning: A machine learning approach where models are trained on a labeled dataset, which means that the data are already tagged with the correct answer.
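The "labeled dataset" idea above can be sketched with a 1-nearest-neighbor classifier in plain Python. The tiny dataset is hypothetical: each example pairs a feature vector with the correct answer (its label), which is exactly what makes the setting supervised. This is a minimal sketch for illustration, not a realistic model.

```python
# Hypothetical labeled data: (height_cm, weight_kg) -> size label.
# The label attached to each example is the "correct answer" the model
# learns from, which defines supervised learning.
labeled_data = [
    ((150.0, 50.0), "small"),
    ((155.0, 55.0), "small"),
    ((180.0, 85.0), "large"),
    ((185.0, 90.0), "large"),
]

def predict(features):
    """Label a new point with the label of its closest training example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(labeled_data, key=lambda item: sq_dist(item[0], features))
    return label

print(predict((152.0, 52.0)))  # → small
print(predict((183.0, 88.0)))  # → large
```

Even this trivial learner shows the core contract of supervised learning: generalize from tagged examples to a prediction for an unseen input.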

Graphics Processing Unit (GPU): A specialized electronic circuit designed to rapidly process and render graphics and images; it is known for its high parallel processing capability.

Synthetic Data: Data that are artificially generated rather than obtained by direct measurement. They are often used for training AI models when real data are limited or sensitive.

Generative Models: AI models designed to generate new data instances similar to their training data. They are widely used in creative applications such as art and music generation and in data augmentation.

Transformer Models: A revolutionary architecture in NLP, known for handling sequences of data and utilizing attention mechanisms. These models excel in understanding context in text, exemplified by Google’s BERT and OpenAI’s GPT series.

Unsupervised Learning: Unlike supervised learning, this machine learning technique uses data that are not labeled, allowing the algorithm to act on the data without guidance.
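The contrast with supervised learning can be sketched with k-means clustering, a classic unsupervised algorithm: no labels are supplied, and the algorithm discovers groups on its own. The 1-D data points and the initial cluster centers below are hypothetical, chosen so the two-cluster structure is obvious; this is an illustrative sketch only.

```python
# Unlabeled 1-D points: two natural groups, but no labels are given.
points = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]

def kmeans(data, centers, iterations=10):
    """Plain-Python k-means on 1-D data."""
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in centers]
        for p in data:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: move each center to the mean of its cluster
        # (keep a center unchanged if its cluster is empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans(points, centers=[0.0, 5.0])
# → centers settle at [1.5, 10.5], splitting the data into its two groups
```

No "correct answers" were ever provided; the structure in the data alone drives the result, which is the defining trait of unsupervised learning.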

Bias in AI: Refers to AI systems displaying prejudice or favoritism, often due to the data they were trained on. This can lead to unfair or discriminatory outcomes.

Dataset: A collection of data that AI models use for training or testing. The quality and diversity of datasets can significantly impact the performance and fairness of AI systems.
